Guiding the Development of AI Systems
Artificial Intelligence (AI) continues to transform how we live, work, interact and access services. There is potential for it to bring multiple benefits to the global population. However, this is balanced by significant concerns that AI systems could turn into a sci-fi nightmare. So, how can AI development be controlled to maximise solutions and minimise problems?
The Role of AI in Business
AI is already embedded in many businesses, from chatbots and e-commerce recommendations to blog writing and order fulfilment. Many more applications are in development and the possibilities seem endless. Through robotics, machine learning, data analysis and natural language processing, AI offers the opportunity to drive productivity, efficiency, innovation and economic growth.
The Risk of AI in Business
AI could provide the solution to many of the world’s problems. However, it could equally be the cause of many more. There are serious concerns about AI being corrupted, and about bias and misinformation being used for harm rather than good. There is also the risk of cyber security, intellectual property and data privacy being compromised. In addition, there is the real possibility that humans could lose control of AI.
So how does the world manage these risks?
AI Safety Summit
At the start of November, governments, leading AI companies, research experts and civil society groups from 28 countries met at Bletchley Park in Buckinghamshire. This was the starting point for sustained international cooperation to ensure the secure development of AI systems.
The aim was to reach a common understanding of the cyber risks associated with AI, along with strategies to mitigate them. From this point, work could begin on agreeing ways to safely realise the opportunities of AI technologies.
To put the UK at the forefront of AI safety, the launch of the AI Safety Institute was announced at the summit. Informed by a specialist global task force, this hub will lead research, advice and testing on emerging AI technologies. It has been granted access to the Isambard-AI supercomputer in Bristol and the Dawn supercomputer in Cambridge to support its work.
Future summits are planned in the Republic of Korea and then France, to keep AI development high on the agenda. As IT Solution specialists, Flex IT is curious to see how this develops and the implications for wider cyber security measures.
Global Guidelines on AI Security
Following the AI Safety Summit, GCHQ’s National Cyber Security Centre (NCSC) and the US Cybersecurity and Infrastructure Security Agency (CISA) have driven work on global guidelines. Developed in partnership with businesses, these aim to put security at the heart of AI innovation.
The guidelines are divided into four sections:
- Secure Design
- Secure Development
- Secure Deployment
- Secure Operations & Maintenance
By following the guidelines, developers and adopters can ensure that cyber security is a precondition of AI system safety, not an afterthought. Collaboration also aids system compatibility.
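To make the four areas a little more concrete, here is a minimal sketch of how a team might track them internally. The four phase names come from the guidelines themselves; the example actions and the helper function are hypothetical placeholders for illustration, not wording from the NCSC or CISA.

```python
# Illustrative sketch only: the four phase names match the NCSC/CISA
# Guidelines for Secure AI System Development; the actions listed under
# each phase are hypothetical examples, not quotes from the guidelines.

SECURE_AI_LIFECYCLE = {
    "Secure Design": [
        "model threats to the AI system before building it",
        "weigh security trade-offs when choosing models and data sources",
    ],
    "Secure Development": [
        "track the provenance of training data and third-party components",
        "protect model weights and pipelines as sensitive assets",
    ],
    "Secure Deployment": [
        "harden the infrastructure the model runs on",
        "plan incident response before release",
    ],
    "Secure Operations & Maintenance": [
        "monitor model behaviour and inputs for abuse",
        "apply updates and share lessons learned responsibly",
    ],
}


def outstanding_phases(completed: set[str]) -> list[str]:
    """Return the lifecycle phases a project has not yet addressed."""
    return [phase for phase in SECURE_AI_LIFECYCLE if phase not in completed]


if __name__ == "__main__":
    # Example: a project that has only covered design and development so far.
    done = {"Secure Design", "Secure Development"}
    print("Still to address:", outstanding_phases(done))
```

Run as a script, the example simply reports which of the four phases a part-finished project still needs to cover; the real guidelines go into far more detail under each heading.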
At the end of November, the guidelines were collectively endorsed and co-sealed by 18 countries. This includes all G7 nations. However, it is notable that China, Russia and North Korea are not represented. For now, let’s bring attention a little closer to home.
AI in Oxfordshire
Oxfordshire has been a pioneer in the early development of AI. With facilities including the Oxford Robotics Institute, Remote Applications in Challenging Environments (RACE) and Harwell Campus, this county is a major contributor to innovation.
Oxfordshire’s reputation for technological research and development has attracted many startups and established companies. As such, the new guidelines will be essential reading material for businesses on our doorstep.
Is your company one of them? If so, it’s time to get familiar with the Guidelines for Secure AI System Development. They apply whether you are developing a system from scratch or building on existing tools and services.
Using Chatbots & More
If your Oxfordshire business has questions about cyber security and AI integration, contact us. We’d love to help. If we don’t have the answer, we’ll know someone who does!
Flex IT Ltd
0333 101 7300
enquiries@flex.co.uk
Read more about robotics and AI at the OxLEP site.