The European Union has passed a new law regulating artificial intelligence (AI). Observers around the world are now asking whether it will set the global standard, much as the EU's data privacy law, the GDPR, did before it. The GDPR reshaped how the United States handled data privacy, especially in the absence of effective federal legislation. Many now wonder whether the same will happen with AI laws.
So far, the United States has declined to follow the EU's newly designed AI rules. Tech companies, meanwhile, are pushing for easier and cheaper rules that do not guarantee the same protections. AI laws have been passed in Colorado and Utah, and bills have been proposed in Oklahoma and Connecticut. These new rules appear to protect people more than past ones did.
What is the major difference between the EU AI Act and a state bill?
The main difference between the AI Act and the state bills lies in their scope. The EU AI Act takes a broad approach to protecting human rights, using a risk-based system to regulate AI. It bans many uses of AI outright, such as ranking people based on their family members or education, while imposing few requirements on lower-risk AI systems.
State bills, like those in Colorado and Connecticut, have a narrower focus. They also use a risk-based system, but only for AI that affects important services such as education or employment. These bills place no outright bans on certain AI uses: Connecticut's bill, for example, would penalize political deepfakes but would not stop their creation. The way AI is defined in these US bills also differs from how it is defined in the AI Act.
There are, nonetheless, many similarities between the AI laws in Connecticut and Colorado and the European AI Act, especially in the rules for building high-risk AI systems. These state laws, however, align more closely with a model AI bill created by Workday, a company that makes software for managing workforces and finances. Workday's model, described in a March article by The Record, sets out the responsibilities of AI developers and of those who deploy the technology, focusing on systems that make consequential decisions.
The state laws closely mirror the requirements in Workday's model bill, particularly in calling for an impact assessment when designing AI systems. The model has also influenced proposed laws in states such as California, Illinois, and New York. A Workday spokesperson said the company is actively helping shape AI policies that protect customers while encouraging innovation, offering technical advice based on discussions with policymakers worldwide.
The tech industry has great influence over AI regulation. In Connecticut, tech companies lobbied successfully to remove a section modeled on the AI Act from a draft bill. Although some big tech companies have expressed support for the bill, it remains stalled: industry groups argue that it would restrict innovation, and Governor Ned Lamont has threatened a veto. Other states, such as Colorado, face similar delays and plan to revise their AI bills to preserve innovation.
As debate advances at the federal level, where tech companies and the Senate are both heavily involved, big tech retains strong influence over AI laws. States worry that stricter AI rules might drive tech companies to places with looser laws, though this concern is less pronounced than it was with data protection laws like the GDPR.
In each state, lobbying groups support a single national AI law over a patchwork of regulations, a view big tech companies also share publicly. In practice, however, many of those companies resist both national and state laws. If regulators fail to pass any laws, AI companies will preserve the status quo, raising the prospect of divergent rules in the United States and the European Union.
While some US companies may find it practical to follow EU rules, that would leave the US less regulated overall, with weaker protection against AI abuses. The EU's AI Act has remained strong despite lobbying efforts. Whether US state laws will deliver a consistent approach to AI regulation remains uncertain.