Regulating AI through shared, secure and sustainable approaches – POLITICO

The birth of the PC and the Internet created digital revolutions that changed our world. The advent of mainstream artificial intelligence (AI) is another inflection point that has ushered us into uncharted territories, leaving us to consider how best to move forward on this next generation-defining journey.

The answer lies in acknowledging the magnitude of the challenges, risks and potential rewards of AI. And in fostering long-lasting cooperation that unlocks AI’s true value ethically and for the benefit of all.

With regulatory approaches to AI in danger of fragmenting across borders, now is the time to seek out collaboration, similar in spirit to the work of the EU-US Trade and Technology Council, the OECD, the World Economic Forum, the G7 and others in this sphere. This isn’t just a matter for policymakers and industry leaders, but for nations and citizens on a global scale. No one should be under any illusion – this is an enormous and critically important task.

Commit to Collaborate

As policymakers consider the requisite AI governance for the coming years, they should take an approach underpinned by the Three S’s – Shared, Secure and Sustainable. We are using these guiding principles in our approach to AI at Dell Technologies. They can also support the way we drive responsible regulation of AI, while harnessing its immense innovative potential for the betterment of all.

Shared represents an integrated, multi-sector and global approach built in alignment with existing tech policies and compliance regulations such as those governing privacy.

Secure means focusing on security and trust at every level – from infrastructure to the output of machine learning models – always ensuring AI remains a force for good, protected from threats and treated as the extremely high-value asset it is.

Sustainable represents the opportunity to harness AI while protecting the environment, minimizing emissions and prioritizing renewable energy sources. AI is among the most compute- and energy-intensive technologies we’ve ever seen, and we must invest as much in making it sustainable as in creating it.

Make It Shared

John Roese, Global Chief Technology Officer, Dell Technologies

At Dell, we’ve always believed in providing choice and open ecosystems. That belief has enabled us to see our customers through the various evolutions of emerging technology, including AI. To ensure a seamless and collaborative approach to AI regulation, we should establish an integrated, streamlined global framework that can benefit the entire digital ecosystem and reduce cost. Regulations should integrate with existing tools like the NIST AI Risk Management Framework and the Blueprint for an AI Bill of Rights to help minimize regulatory fragmentation and divergent enforcement. Applied effectively, this approach can prevent AI from operating in isolation from other major tech domains such as privacy, data, cloud and security.

We have seen this approach championed by the Conference on Fairness, Accountability, and Transparency, among others. AI policy should not be created in a vacuum. As a business, we have engaged with global policy networks like the OECD through its AI Community and Business Round Table, and we see great value in multi-sector consultation to help pave a positive way forward.

Make it Secure

There will soon be a significant proliferation of open and closed-source Large Language Models (LLMs) for specialized and general uses across the globe. Many enterprises will independently employ their own closed LLMs to securely unleash the power of their data. The distinction between open and closed source is significant. Consequently, it’s critical, within the realms of regulation, that we consider the strengths and risks of both approaches to ensure we meet their respective potential and avoid unnecessary restrictions.

Dell has long championed robust security standards to help prevent, detect and remediate attacks across traditional computing. Additionally, we have committed to accelerating the delivery of true zero trust architectures as a path to a new IT security paradigm. This ethos naturally extends to AI security, where we remain committed to continuously studying how to protect systems and users from persistent risk. We believe that over the long term the value and risk of AI will necessitate zero trust adoption. As we move toward that end state, our products and solutions will continuously add more zero trust principles to systems by design. Our recent partnership with NVIDIA to launch Dell Generative AI Solutions is a good example of how advanced tools can feature robust security from the outset.

An additional element of AI security is the need to create frameworks of trust for the technology. To do this, we believe appropriate disclosure and transparency approaches should be developed and adopted globally. Because of the inherent complexity of most AI systems, transparency rules should disclose what data an AI system used, who created it, and what tools were used – rather than attempt to explain the inner workings of the technology. AI is complex, but establishing ways to trust the ecosystem that created a system is an accessible path to earning the trust of its users.

Make it Sustainable

At Dell Technologies, we believe in delivering the most efficient, most effective, most sustainable AI infrastructure for what customers are trying to deploy. This means integrating with the right architecture and technology to support their needs.

We already understand that advanced technologies require increasing levels of power. Going beyond recognizing that fact, we’re taking active steps to improve product energy efficiency, sustainable data center solutions, and the use of sustainable materials wherever possible. We must establish similar protocols for AI hardware infrastructure – protocols that hold industries to high standards while supporting innovation. With increased AI data processing comes increased data center energy use and higher performance requirements. Using solutions like smart scaling and more efficient processing, we’re already working to cut our own consumption. Compounding those efforts while leveraging significant renewable energy sources across industries can ensure AI models solve more challenges than they create.

Closing the Loop

The rapid advancement of mainstream AI necessitates a new spirit of collaboration. We’ve already seen this in the commitments forged between the US government and industry players on the disclosure of AI-generated content and enhanced security for end users. By designing the right guardrails, guided by universally agreed-upon principles (similar to the climate governance established by the World Economic Forum), we can ensure AI technologies are developed responsibly, ethically, and with due consideration for the potential risks they pose. It is our collective responsibility to shape the future of AI through thoughtful regulation that balances innovation, societal well-being, and the preservation of individual rights.

There is no AI project in the world that is based on a single piece of technology. It will encompass storage, networking, compute, and the requisite integration – all of which must be underpinned by security and trust. By getting all these elements aligned, we maximize the vast potential of AI to benefit all.
