Safe and trustworthy AI is a shared responsibility
In an era where artificial intelligence (AI) is rapidly transforming industry and society, collaboration between the public and private sectors has never been more critical. Trust and safety are ultimately on the line.
Cisco is a proud signatory and supporter of the EU AI Pact, outlining shared commitments around implementing appropriate governance, mapping organizations’ high-risk use cases, and promoting AI literacy and safety for employees. Each of these measures plays an important role in fostering innovation while mitigating risk. They also align closely with Cisco’s longstanding approach to responsible business practices.
Advancing our approach to AI governance
In 2018, Cisco published its commitment to proactively respect human rights in the design, development, and use of AI. We formalized this commitment in 2022 with Cisco’s Responsible AI Principles. We operationalize these principles through our Responsible AI Framework. And in 2023, as the use of generative AI became more prolific, we used our Principles and Framework as a foundation to build a robust AI Impact Assessment process to evaluate potential AI use cases, both in product development and internal operations.
Cisco is an active participant in the development of frameworks and standards around the world, and in turn, we continue to refine and adapt our approach to governance. Cisco’s CEO Chuck Robbins signed the Rome Call for AI Ethics, confirming our commitment to the principles of transparency, inclusion, accountability, impartiality, reliability, security, and privacy. We have also closely followed the G7 Hiroshima Process and align with the International Guiding Principles for Advanced AI Systems. Europe is a first mover in AI regulation addressing risks to fundamental rights and safety through the EU AI Act, and we welcome the opportunity to join the AI Pact as a first step in its implementation.
Understanding and mitigating high-risk use cases
Cisco fully supports a risk-based approach to AI governance. As organizations begin to develop and deploy AI across their products and systems, it is critical to map the potential uses and mitigation approaches.
At Cisco, this important step is enabled by our AI Impact Assessment process. These analyses look at various aspects of AI and product development, including underlying models, use of third-party technologies and vendors, training data, fine-tuning, prompts, privacy practices, and testing methodologies. The ultimate goal is to identify, understand, and mitigate any issues related to Cisco’s Responsible AI Principles – transparency, fairness, accountability, reliability, security, and privacy.
Investing in AI literacy and the workforce of the future
We know AI is changing the way work gets done. In turn, organizations have an opportunity and a responsibility to help employees build the skills and capabilities necessary to succeed in the AI era. At Cisco, we are taking a multi-pronged approach. We have developed mandatory training on safe and trustworthy AI use for global employees and have created several AI learning pathways for our teams, depending on their skill set and business.
But we want to think beyond our own workforce. Through the Cisco Networking Academy, we have committed to train 25 million people around the world in digital skills, including AI, by 2032. We are also leading the work of the AI-Enabled ICT Workforce Consortium, in partnership with our industry peers, to provide organizations with knowledge around the impact of AI on the workforce and to equip workers with relevant skills.
Looking to the future
We are still in the early days of AI. And while there are many unknowns, one thing remains clear. Our ability to build an inclusive future for all will depend on a shared commitment to safe and trustworthy AI across the public and private sectors. Cisco is proud to join the AI Pact and to continue demonstrating our strong commitment to Responsible AI globally.