In the sea of published commentary on ChatGPT and other generative AI platforms that produce text and images, one message recurs: companies must establish governance to ensure their adoption of AI is responsible. This two-part article series addressed the practical realities of AI governance, with observations from front-line practitioners. Part one offered tips from a leading provider of AI language and recommendation technology on forging a corporate culture that addresses AI risks, including toxic language output and misuse of available data sets. Part two presented top priorities for companies crafting AI compliance programs and examined the emerging market for automated AI governance platforms, drawing on insights from experts at IBM, the AI Responsibility Lab and PwC, along with findings from an Accenture report.