Q&A: Rounds on regulating AI, which will impact ‘every single aspect of our lives’

(Official U.S. Senate photo by Dan Rios)

John Hult/South Dakota Searchlight

Talk of artificial intelligence is all but unavoidable in 2023.

The emergence of tools like ChatGPT for text generation and Midjourney for art has pushed the technology into the public consciousness in a way voice recognition like the iPhone’s Siri or autocomplete features on a word processor never did.

AI is an umbrella term that generally refers to programs that can ingest and analyze massive datasets to draw conclusions or perform tasks. Ask ChatGPT a question, and it uses what it’s learned from billions of words of “training data” to generate an answer within seconds.

In March, a group of tech leaders signed an open letter urging U.S. companies to pause AI development over concerns it was advancing more rapidly than regulators could keep up with.

Republican Sen. Mike Rounds of South Dakota has become a key player in the AI conversation in recent months. He’s part of a bipartisan four-senator group conducting a series of meetings on AI in national defense, copyright and health care, the most famous of which was a closed-door meeting with tech luminaries including Elon Musk of Tesla, former Google CEO Eric Schmidt and Sam Altman of OpenAI, the company behind ChatGPT.

On Thursday, after participating in an AI panel discussion at the South Dakota Biotech annual summit and meeting in Sioux Falls, Rounds answered questions from South Dakota Searchlight on the path forward for Congress.

Why is this issue so important to you?

It’s been around for a long time, but it’s now something that people are concerned with, and it’s not going to go away. Every single aspect of our lives, whether you’re talking about social media, electricity, telecommunications, financial services – all of those are impacted by the application of artificial intelligence.

I got involved in it because of my ties with the Cyber Subcommittee on the Armed Services Committee. We started seeing how much artificial intelligence was impacting cyber operations. It became very clear that we could not hire enough specific cyber experts without having AI as a tool for them to use to stay on top of the attacks.

The other part of this is health care. Last year, there was a report to the Department of Defense on artificial intelligence and its impact on our defense. There was a classified portion that I got because I was on the cyber subcommittee, and I saw all the things we could do with regard to health care, like the ability to really address cancer, which is personal to me [Rounds’ wife, Jean, died from cancer in 2021].

It became evident that the information in that classified report was not being disseminated to the people who could actually fund it, who could look at the upside to artificial intelligence. That really became a part of what drove me.

I actually got the Senate to change its rules so that every single office had an employee who could review items that were identified as classified in (the report). Doing so meant that staff members could go back to their members, tell them to get the report and to integrate it into the work that they were doing in their committees, whether that was the Appropriations Committee, the Armed Services Committee, or the Commerce Committee or the Judiciary Committee.

Can I ask you about the group of four senators working on this issue? What have you done so far, and what are the next steps? What’s the timeline there?

We’ve established a series of meetings with tech experts, AI experts. We had one series with all of those recognized names within the industry, and then we’ve been holding other smaller informational groups, where all the members [of the Senate] are welcome.

We did one on national defense, which was classified, and then we’re doing one now where innovation is going to be part of it, and Dr. José-Marie Griffiths [of Dakota State University] will be actively involved. I’m doing one when we get back [to Washington, D.C.] on cancer and on cancer research, one doing kind of a moonshot on cancer with artificial intelligence.

We’ve got a series of these things, about 12 of them, on and off over the next four months, and we’re always doing it on a bipartisan basis.

With social media, some want Congress to force companies to open up the black box and show us the algorithms. If we’re here at the ground floor with generative AI, do you suspect that these regulations will include some measure of transparency?

There are two types of databases. One is the open databases, or open AI. Those will be used in our universities, and it’s going to be a lot harder to control because people can use them as they wish. Then there’ll be proprietary databases, which companies will have, and they’re not going to want to share the information on those.

In both cases, we have challenges on how we integrate appropriate regulatory directives. We’ve got to be careful, because AI is something the entire world is following right now. If we overregulate, they’ll simply move outside of the United States. If you chase them off campus, so to speak, then they’re unregulated entirely.

Most of the companies we’ve talked to want guidelines and a framework that helps to promote AI, but that also identifies good actions from unacceptable actions. And in doing so, they believe we’ve got our best shot at getting people to stay here and to develop here. Other countries might very well try to overregulate it, and all they’re going to do is chase that business back into the United States.

So it is a challenge, but that’s the reason why we want to kind of go committee by committee. Each committee understands what’s important to the industries that they regulate, but also with regard to the existing regulation that should still be enforced.

Just because you’re using a new tool, that doesn’t mean you’ll somehow get away with doing something that you couldn’t do beforehand. If it’s illegal to plagiarize or to use a patent without permission, it’s still illegal if you do it with AI.

How can we be confident that the industry isn’t writing these regulations?

There are people who are technicians. It’s kind of like when we determine industry standards for the particular type of pipe or the strength of a pipe. We go to ASTM (the American Society for Testing and Materials, now ASTM International). It’s the institute that actually looks at the standards that our manufacturing processes are based on. This has been going on for decades. It’s something that all the people within the industry understand. The way that I look at it, let’s do the same thing when it comes to AI.

If we can get them to act as referees, then the players know there’s actually somebody there with technical expertise that will challenge them if they try to get away with something.

Someone from Avera said Thursday that the process of getting FDA approval for a new use of AI in health care makes it unaffordable. Is there a place for any regulatory changes that would maybe allow people in South Dakota and around the country to be a little bit more maneuverable?

I can’t speak specifically to that instance, but I can tell you that as AI is being developed, it’s actually going to be used to test and to look at other AI products.

As the federal government gets more into it, the FDA could actually contract with outside organizations that are specifically designing AI tools to look at these, and to confirm or recommend approval.

But you’re still going to have to have a regulatory process that takes extra steps to make sure that the American people feel comfortable that these are safe to use, and do more good than harm.

Are there concerns about displacing people like agronomists or other experts with AI in agriculture? Or do you think it’ll go the other way?

I think it’s going to make those agronomists even more valuable, because with more tools being available, they’re going to be able to walk in and to use these tools and actually show a dramatic change in the profitability for those farmers or ranchers.

A machine is probably not going to start by going through everything that needs to be done with a farmer. Someone’s going to help program the systems and help the farmers actually integrate AI into their machinery.

The best thing I can equate to is that it used to be when you were flying an airplane, you had an instrument landing system, what you call VOR systems, to fly from point A to point B.

Now we have GPS. But included in GPS are huge new numbers of approaches you can do. Somebody’s got to install (these systems) and somebody’s got to service them. Those guys are doing better now than they ever did before.

You’ve talked about restricting “bad guys” from using the computing power we have in the U.S. to run the algorithms. How do you go about stopping a bad guy from contracting with an international player like Amazon Web Services?

We do it now through our trade agreements, and through the Department of Commerce and Homeland Security. It’s being done today.

But even more importantly, it’s the chips that are created by some of the most advanced chip makers in the world that we contract with, that we have, and China does not have. We have to restrict China’s access to those most advanced chips, and we’re doing that today.

Sources vary, anywhere from six months to a year and a half, as far as our standing ahead of China in terms of development. That’s not a lot of time, but it’s enough to keep them from being super aggressive with their military capabilities. They already harass our aircraft in the open skies area around Taiwan. They already harass our ships in the free shipping areas, because they’re trying to make life miserable for people that believe in free economic movement on the high seas.

For us, if we can keep a step ahead of China, then we can keep the peace.