Friday, April 12, 2024

NewsHorizon


Managing artificial intelligence through collaborative, secure, and environmentally friendly methods.


The emergence of personal computers and the Internet sparked technological advancements that drastically transformed our society. The rise of artificial intelligence (AI) as a widely-used tool marks a significant turning point, propelling us into unfamiliar territory and prompting us to contemplate the most effective path forward in this era-shaping undertaking.

The answer lies in recognizing the scale of AI's obstacles, uncertainties, and potential benefits, and in fostering sustainable collaboration that ethically unlocks AI's full potential for the betterment of society.

This is not only a concern for government officials and business executives, but also for countries and individuals worldwide.

As AI regulation risks fragmenting across countries, it is crucial to pursue collaboration of the kind already established by the US-EU Trade and Technology Council, the OECD, the WEF, the G7, and other organizations working in this field. It is important to recognize that this is a massive and highly consequential undertaking.

Commit to Collaborate

As decision-makers contemplate the necessary regulations for AI in the future, they should adopt a framework based on the Three S’s – Shared, Secure, and Sustainable. At Dell Technologies, we are implementing these guiding principles in our approach to AI. They can also aid in promoting responsible governance of AI and leveraging its vast potential for positive change.

• Shared represents an integrated, multi-sector and global approach built in alignment with existing tech policies and compliance regulations such as those governing privacy.

• Secure means focusing on security and trust at every level – from infrastructure to the output of machine learning models – ensuring AI remains a force for good, is protected from threats, and is treated like the extremely high-value asset it is.

• Sustainable represents the opportunity to harness AI while protecting the environment, minimizing emissions and prioritizing renewable energy sources. AI is the most computationally and energy-intensive technology we’ve ever seen, and we must invest as much in making it sustainable as in creating it.

Make It Shared

John Roese is the Global Chief Technology Officer at Dell Technologies.

Dell has always prioritized offering choice and supporting open ecosystems, which has allowed us to navigate successive technology shifts, including the emergence of AI. Effective regulation of AI requires a unified global framework that promotes collaboration and reduces costs for the entire digital community. Regulations should be integrated with existing legislative tools, such as the NIST AI Risk Management Framework and the Blueprint for an AI Bill of Rights, to minimize conflicting rules and enforcement. This approach keeps AI from being governed in isolation from other major technology areas – privacy, data, cloud computing, and security – a principle also advocated by bodies such as the ACM Conference on Fairness, Accountability, and Transparency.

AI policy should not be developed in a vacuum. As a company, we have actively engaged with global policy networks, such as the OECD’s AI Community and Business Round Table, and we believe that bringing multiple sectors into the conversation is crucial to a positive path forward.

Make It Secure

In the near future there will be a sharp increase in the availability of Large Language Models (LLMs), both open- and closed-source, applied to a wide range of purposes globally. Many businesses will run their own closed LLMs to put their data to work while maintaining security. The distinction between open and closed source matters, so we must carefully assess the strengths and risks of each approach within regulatory guidelines – allowing us to realize their potential while avoiding unnecessary limitations.

We believe that the value and the potential dangers of AI will ultimately require a zero trust approach.

Dell has long advocated strict security measures to prevent, identify, and address attacks in traditional computing, and we are committed to accelerating the adoption of true zero trust architectures as a new standard for IT security. The same mindset applies to securing AI: we continually research ways to safeguard systems and users from persistent threats. As the value and risks associated with AI grow, zero trust will become a necessity, and as we work toward that goal, our products and solutions will incorporate more zero trust principles into their design. Our recent collaboration with NVIDIA to introduce Dell Generative AI Solutions is a prime example of cutting-edge tools built with strong security measures from the start.

One aspect of securing AI is establishing frameworks of trust for the technology through disclosure and transparency practices applied on a global scale. Because most AI systems are complex, transparency guidelines should focus on disclosing the data used, who created it, and the tools involved, rather than attempting to explain the inner workings of the models themselves. AI may be intricate, but building trustworthy ecosystems around it is a practical way to earn the confidence of those who use AI systems.

Make It Sustainable

At Dell Technologies, our goal is to provide customers with the most efficient, effective, and sustainable AI infrastructure that aligns with their deployment objectives. This involves incorporating the appropriate architecture and technology to meet their requirements.

We all have a duty to influence the future of AI by carefully regulating it.

It is no secret that advanced technologies demand more power. Beyond acknowledging that reality, we are actively working to improve product energy efficiency, deploy sustainable data center solutions, and use sustainable materials wherever feasible. Similar protocols should be established for AI hardware infrastructure, holding the industry to rigorous standards while encouraging innovation. As AI data processing grows, so does the demand for energy and performance in data centers. Through strategies such as smart scaling and more efficient processing, we are already working to reduce our own consumption. By building on these efforts and drawing on renewable sources across industries, we can ensure that AI models solve more challenges than they create.

Closing the Loop

The rapid progress of mainstream AI requires a new mindset of cooperation, as demonstrated by partnerships between the US government and industry leaders to ensure transparency of AI-generated content and improved security for users. By establishing guidelines grounded in universally accepted principles – similar to the climate governance frameworks set out by the World Economic Forum – we can ensure that AI technologies are developed responsibly and ethically, with consideration for potential risks. It is our shared responsibility to shape the future of AI through careful regulation that balances innovation, societal well-being, and the protection of individual rights.

No AI project relies on a single technology. It involves multiple components – storage, networking, computing, and integration – all of which must prioritize security and trust. By aligning these elements, we can realize the full potential of AI to benefit everyone.