In this era of AI innovation, organizations are pouring tens of billions of dollars into AI development. However, for all the money invested in capabilities, there is no corresponding investment in AI management.
Some companies may take the position that when the world’s governments release AI regulations, that is the right time to tackle AI programs with a management structure that can address complex topics such as privacy, transparency, accountability, and fairness. In the meantime, the business can focus solely on AI performance.
The wheels of regulation are now in motion. However, regulations move at the speed of bureaucracy, while AI innovation only accelerates. AI is already being deployed at scale, and we are rapidly approaching a point where AI capabilities will outpace the effectiveness of any rule-making body—which puts the responsibility for self-regulation in the hands of business leaders.
The solution to this puzzle is for organizations to find a balance between compliance with existing rules and self-regulation. Some companies are stepping up to the responsible AI challenge: Microsoft has its Office of Responsible AI, Walmart a Digital Citizenship team, and Salesforce an Office of Ethical and Humane Use of Technology. However, many organizations must still quickly embrace a new era of AI self-regulation.
The business value of self-regulation
Government bodies cannot look at every business, understand at a technical level what AI programs are emerging, predict the potential issues that may result, and then quickly create rules to prevent problems before they happen. That’s an unattainable regulatory scenario—and no business wants one in any case. Instead, each business has an intimate view of its own AI efforts, putting it in the best position to address AI issues as they are identified.
While government regulations are enforced with fines and litigation, the consequences of failing to self-regulate tend to be more severe.
Imagine an AI tool deployed in a retail setting that uses CCTV feeds, customer data, real-time behavioral analysis, and other data to predict what a shopper is likely to buy when an employee uses a particular sales technique. The AI also forms customer personas that are stored and updated for targeted advertising campaigns. The AI tool itself was purchased from a third-party vendor and is one of dozens of AI tools deployed throughout the retailer’s operations.
Emerging regulations may dictate how customer data is stored and transferred, whether consent is required before data is collected, and whether the tool can be proven fair in its predictions. These considerations are valid, but they are not comprehensive from a business perspective. For example, have the AI vendor and its tools been vetted for security gaps that could harm connected business technologies? Does the staff have the necessary training and documented responsibilities needed to use the tool properly? Do customers know that AI is used to create a detailed persona that is stored in another location? Should they know?
The answers to these types of questions can affect the business in terms of security, efficiency, ROI on technology investments, and brand reputation, among others. This hypothetical case reveals how failure to self-regulate AI programs exposes the organization to many potential problems—many of which are likely to be beyond the purview of government regulation. The best way forward with AI is shaped by management.
Governance for trust in AI
No two companies and AI use cases are the same, and in the era of self-regulation, business is called upon to evaluate whether the tools it uses can be deployed safely, ethically, and in accordance with company values and existing or tangential rules. In short, businesses need to know whether their AI is worthy of trust.
Trust as a governance lens affects more than the commonly cited concerns of AI, such as the potential for discrimination and threats to the security of personal data. As I discuss in my book, Trustworthy AI, trust also applies to things like reliability over time, transparency to all stakeholders, and accountability baked into the entire AI lifecycle.
Not all of these factors are relevant to every organization. An AI that automates trade reconciliation probably does not pose a threat of discrimination, but the security of the model and the underlying data is important. In contrast, data security is relatively less of a concern for a predictive AI used to anticipate food and housing insecurity, but inequality and discrimination are priority considerations for a tool that relies on historical data, which may be full of hidden bias.
Effective AI self-regulation requires a full life cycle approach, where attention to trust, behavior, and outcomes is included at each stage of the project. Processes must be changed to set clear waypoints for decision-making. Employees must be educated and trained to contribute to the management of AI, with a solid understanding of the tools, their impact, and individual employee responsibilities in the life cycle. And the technology ecosystem of edge devices, cloud platforms, sensors, and other tools must all be aligned to foster the qualities of trust that matter most in a given deployment.
Self-regulation fills the gap between innovation and government-made rules. Not only does this put the business on a path to meet any regulations that may arise in the future, but it also provides significant business value by maximizing investment and minimizing negative consequences.
For all that we spend on building AI capabilities, we must also invest in how we manage and use these tools to their full potential in a trustworthy way—and we can’t wait for governments to tell us how.
Beena Ammanath is the executive director of the Global Deloitte AI Institute.
The opinions expressed in Fortune.com commentary pieces are solely the views of their authors and do not necessarily reflect the opinions and beliefs of Fortune.