November 1, 2023 | 4 Min

The Amplience Commitment to The White House Executive Order on AI

Andrew Boulton

No doubt you’ve seen the Executive Order from the White House on the safe, secure, and trustworthy use of Artificial Intelligence. Whether you’re developing AI software or consuming it, it’s something you must pay attention to.

AI is on everyone’s mind right now – not just at a business or cultural level, but at the highest levels of government and business all over the world. There is certain to be a lot of discussion, debate and legislative consideration around the responsible and appropriate use of AI, so we wanted to share our take on this development, hopefully helping you understand our commitment to efficacy and what this means for your own AI use and planning.

What does this mean for AI?

To start, we should be clear that this order relates mostly to responsible use by US government agencies (many other countries already have or are developing their own similar policies). But, interestingly, the order also sees the Biden administration taking a global lead to ensure AI doesn’t become a threat to national security, as well as considering the ways in which it can be of benefit to society.

So, as an AI Content Company, how do we at Amplience feel about an order that effectively demands safe, secure and trustworthy AI? We embrace it, as any responsible AI organization — commercial or governmental — should.

AI is here to stay, revolutionizing industries and changing the way we work and think about our future. No matter what your personal views, hopes or fears may be, it will play a role in our everyday lives. This order from the White House acknowledges the enormous promise of AI and its potential to create positive change in the world. But, quite rightly, it also points out that clear and careful human judgement is needed to ensure AI delivers the great benefits it promises, while guarding against misuse or harm.

High standards for AI

The set of rigorous standards to be determined by NIST (National Institute of Standards and Technology) for testing AI systems to ensure they’re safe, secure and trustworthy is a step forward in AI auditing and safety that we applaud. The software industry is buzzing with the release of generic, uncontrolled AI features and capabilities that will likely soon be deemed non-compliant in the face of new legislation. Investing in context and compliance is as important as (if not more important than) innovation. Brands and retailers should demand these assurances from their software vendors — and yes, that includes us. While safety, security and trustworthiness are clear priorities, we have additional criteria for determining when it is appropriate to invest in or implement AI. We call it ACE, which stands for the Applicability, Contextualization and Efficacy of AI. Get in touch with us to learn more; we’d be happy to share our vision, roadmap and commitment to compliance with you.

Legislation like this should be neither a barrier nor a concern for any responsible company operating in this field. If it causes alarm within an organization, that’s a strong indication they shouldn’t be developing AI software in the first place.

Our take is simply this – AI and people are better together. Any directive that mitigates the potential misuses of AI, while also maximizing the benefits for everyone using it, is to be applauded (and cheered, and whooped, and patted on the back). If anything, this kind of government-level thinking needs to go much further – promoting an understanding and adoption of relevant AI in education and the wider public sphere.

This White House order was, quite rightly, big news in the AI world. It encourages open dialogue about the democratization of AI and how everyone can benefit in meaningful ways, without bias or discrimination. And, as a global company that’s committed to the revolutionary yet responsible use of AI, that fills us with a great deal of hope.