They’re making good progress on this and anticipate having that framework out by the beginning of 2023. There are some nuances here—different people interpret risk differently—so it’s important to come to a common understanding of what risk is, what the potential harms might be, and what appropriate approaches to risk mitigation look like.
You’ve talked about the issue of bias in AI. Are there ways that the government can use regulation to help solve that problem?
There are both regulatory and nonregulatory ways to help. There are a lot of existing laws that already prohibit the use of any kind of system that’s discriminatory, and that would include AI. A good approach is to see how existing law already applies, and then clarify it specifically for AI and determine where the gaps are.
NIST came out with a report earlier this year on bias in AI. They mentioned a number of approaches that should be considered for governing in these areas, but a lot of it comes down to best practices: things like making sure that we’re constantly monitoring the systems, or that we provide opportunities for recourse if people believe they’ve been harmed.
It’s making sure that we’re documenting the ways that these systems are trained, and on what data, so that we can make sure that we understand where bias could be creeping in. It’s also about accountability, and making sure that the developers and the users, the implementers of these systems, are accountable when these systems are not developed or used appropriately.
What do you think is the right balance between public and private development of AI?
The private sector is investing significantly more than the federal government in AI R&D. But the nature of that investment is quite different. The investment happening in the private sector is very much in products or services, whereas the federal government is investing in long-term, cutting-edge research that doesn’t necessarily have a market driver but does potentially open the door to brand-new ways of doing AI. So on the R&D side, it’s very important for the federal government to invest in those areas that don’t have that industry-driven reason to attract investment.
Industry can partner with the federal government to help identify what some of those real-world challenges are. That would be fruitful for US federal investment.
There is so much that the government and industry can learn from each other. The government can learn from the best practices and lessons that companies have developed internally, and industry can help the government focus on the appropriate guardrails that are needed for AI.