
Opinion

How business and government can put trust at the centre of our AI future

Zuko Mdwaba, Salesforce Area VP / Africa Executive

Generative AI is revolutionising the way we live and work. Across industries, leaders and users alike are experimenting with these technologies and tapping into their power.

Nearly 7 in 10 workers say generative AI will help them better serve customers.

But like any new technology, generative AI is not without risks. Unlike consumers using AI assistants such as Apple’s Siri and Amazon’s Alexa, enterprise customers require higher levels of trust and security, especially in regulated industries.

For those working with the world’s leading businesses, it’s critical to explore this technology in an intentional and responsible way, keeping ethics top of mind and addressing customers’ concerns around data ethics, privacy, and control of their data.

It’s encouraging to see governments begin to take definitive action to ensure trustworthy AI. Businesses are eager for guardrails and guidance, and are looking to governments to create policies and standards that will help ensure trustworthy and transparent AI.

Helping users understand when AI is being used and what it is recommending, especially for high-risk or consequential decisions, is critical: end users need access to information about how AI-driven decisions are made.

Creating risk-based frameworks, pushing for commitments to ethical AI design and development, and convening multi-stakeholder groups are just a few key areas where policymakers must help lead the way.

It’s not just about asking more of AI. We need to ask more of each other — our governments, businesses, and civil society — to harness the power of AI in safe, responsible ways.

We don’t have all the answers, but we understand that leading with trust and transparency is the best path forward.

There are numerous ways that business and government can deepen trust in AI, providing us with the technical know-how and muscle memory to handle new risks as they emerge.

Protect people’s privacy

The AI revolution is a data revolution, and we need comprehensive privacy legislation to protect people’s data.

At Salesforce, we believe companies should not use any datasets that fail to respect privacy and consent.

By separating data from the Large Language Model (LLM), organisations can be confident that their data is protected from third-party access without customer and user consent. When the LLM does access that data, it should be kept safe through methods such as secure data retrieval, dynamic grounding, data masking, toxicity detection, and zero retention.
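To make one of these methods concrete, here is a minimal sketch of data masking: sensitive values are replaced with placeholder tokens before a prompt leaves the organisation's boundary, and restored only in the response. The `mask`, `unmask`, and `ask_llm` names are hypothetical, and `ask_llm` merely stands in for whatever model endpoint an organisation uses; this is an illustration of the pattern, not any vendor's actual trust layer.

```python
import re

# Patterns for two common kinds of PII; a real deployment would use a
# far richer detection layer (NER models, dictionaries, policy rules).
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "PHONE": re.compile(r"\+?\d[\d\s-]{7,}\d"),
}

def mask(text: str) -> tuple[str, dict[str, str]]:
    """Replace PII with placeholder tokens before the text leaves the
    organisation's boundary; return the mapping needed to unmask later."""
    mapping: dict[str, str] = {}
    for label, pattern in PII_PATTERNS.items():
        for i, value in enumerate(pattern.findall(text)):
            token = f"<{label}_{i}>"
            mapping[token] = value
            text = text.replace(value, token)
    return text, mapping

def unmask(text: str, mapping: dict[str, str]) -> str:
    """Restore the original values in the model's response."""
    for token, value in mapping.items():
        text = text.replace(token, value)
    return text

def ask_llm(prompt: str) -> str:
    """Hypothetical stand-in for a real model call: only masked text is
    ever sent, and nothing is stored here (the 'zero retention' idea)."""
    return f"Draft reply for: {prompt}"

prompt = "Write a follow-up email to jane@example.com about invoice 42."
masked, mapping = mask(prompt)          # the raw email address never leaves
response = unmask(ask_llm(masked), mapping)
print(response)
```

The design point is that the model only ever sees `<EMAIL_0>`, so even a third-party LLM provider never receives the customer's actual data.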

When collecting data to train and evaluate models, it’s important to respect data provenance and ensure that companies have consent to use that data.
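As a minimal illustration of what respecting consent and provenance might look like in practice, assuming each training record carries a hypothetical `consented` flag and `source` field, a data pipeline could filter and log records before training:

```python
from dataclasses import dataclass

@dataclass
class Record:
    text: str
    source: str        # where the data came from (provenance)
    consented: bool    # whether use for training was agreed to

def build_training_set(records: list[Record]) -> list[Record]:
    """Keep only records whose owners consented, and log the
    provenance of everything that enters the training set."""
    kept = [r for r in records if r.consented]
    for r in kept:
        print(f"including record from {r.source}")
    return kept

corpus = [
    Record("Support ticket ...", source="crm_export_2024", consented=True),
    Record("Scraped forum post ...", source="web_scrape", consented=False),
]
training_set = build_training_set(corpus)  # only the consented record survives
```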

For governments, protecting their citizens while encouraging inclusive innovation means creating and giving access to privacy-preserving datasets that are specific to their countries and cultures.

Policy should address AI systems, not just models

A lot of attention is being paid to models, but to address high-risk use cases we must take a holistic view of data, models, and apps. Every entity in the AI value chain must play a role in responsible AI development and use.

A one-size-fits-all approach to regulation may hinder innovation, disrupt healthy competition, and delay the adoption of the technology that consumers and businesses around the world are already using to boost productivity.

Regulation should differentiate the context, control, and uses of the technology and assign guardrails accordingly. Generative AI developers, for instance, should be accountable for how the models are trained and the data they are trained on. At the same time, those deploying the technology and deciding how the tool is being used should establish rules governing that interaction.

When it comes to model size, bigger is not always better. Smaller models can offer high-quality responses and can be better for the planet. Governments should incentivise carbon-footprint transparency and help scientists advance carbon efficiency for AI.

Appropriate guardrails will unlock innovation

Trust in AI is as important as functionality. Enterprises increasingly require on-demand availability, consistent uptime, and reliable security. When companies offer a service, customers expect it to be available whenever they need it; that dependability powers trust.

Organisations need AI tools that are available, fault-tolerant, secure, and sustainable – this is ultimately how they build trust both within their organisation and with their customers.
