Tech entrepreneur Ian Hogarth to lead UK's AI Foundation Model … – GOV.UK


  • Leading tech entrepreneur, renowned investor and AI specialist will chair the Foundation Model Taskforce.
  • The Taskforce will lead vital AI safety research as part of driving forward the safe and reliable development of Foundation Models while seizing the extraordinary opportunities they present.
  • Modelled on the success of the Vaccine Taskforce and operating with the same agility and delegated authority, the Taskforce is backed by an initial £100 million of government funding.

The renowned tech investor, entrepreneur and AI specialist Ian Hogarth has been announced as the chair of the Government’s Foundation Model Taskforce, reporting directly to the Prime Minister and Technology Secretary.

A leading authority on AI, Ian has co-authored the annual State of AI report, which charts the field’s progress, since 2018. Ian is also a visiting professor at University College London and has a strong background in tech entrepreneurship as the founder of the start-up Songkick and the venture capital fund Plural.

The appointment, which follows the launch of the AI White Paper, brings a wealth of experience to the responsible development of this technology, a goal that underpins the government’s AI strategy. Ian’s strong commercial experience and connections across the AI sector equip him with valuable insights that he will bring to this role.

Under Ian’s leadership, a key focus for the Taskforce in the coming months will be taking forward cutting-edge safety research in the run up to the first global summit on AI safety to be hosted in the UK later this year.

Bringing together expertise from government, industry and academia, the Taskforce will look at the risks surrounding AI. It will carry out research on AI safety and inform broader work on the development of international guardrails, such as shared safety and security standards and infrastructure, that could be put in place to address the risks.

The Taskforce will be modelled on the success of the Vaccine Taskforce, operating with the same agility and delegated authority, so the Chair and Taskforce are empowered to take forward work and make decisions at pace.

Ian Hogarth, Chair of the Foundation Model Taskforce, said:

UK scientists and entrepreneurs have made many important contributions to the field of AI, from Alan Turing through to AlphaFold. The Prime Minister has laid out a bold vision for the UK to supercharge the field of AI safety, one that until now has been under-resourced even as AI capabilities have accelerated. I’m honoured to have the chance to chair such an important mission in the lead up to the first global summit on AI Safety in the UK.

Prime Minister Rishi Sunak said:

The more artificial intelligence progresses, the greater the opportunities are to grow our economy and deliver better public services.

But with such potential to transform our future, we owe it to our children and our grandchildren to ensure AI develops safely and responsibly.

As one of the leading figures in UK tech, it’s great to have Ian leading our expert taskforce, empowered with authority and agility to build our leadership in AI safety and development.

It will ensure we do things differently and move with the same pace and vigour as we rise to meet the task ahead.

In April, the government committed an initial £100 million to set up the Foundation Model Taskforce, seizing the extraordinary opportunities presented by cutting-edge AI systems and advancing their safety and reliability.

Foundation models, including the large language models that power popular new services like ChatGPT, Google Bard and Claude, are general-purpose AI systems trained on massive data sets which can be applied to tasks across the economy.

Technology Secretary Chloe Smith said:

Our Foundation Model Taskforce will steer the responsible and ethical development of cutting-edge AI solutions, and ensure that the UK is right at the forefront when it comes to using this transformative technology to deliver growth and future-proof our economy.

With Ian on board, the Taskforce will be perfectly placed to strengthen the UK’s leadership on AI, and ensure that British people and businesses have access to the trustworthy tools they need to benefit from the many opportunities artificial intelligence has to offer.

The expert Taskforce will help build UK capabilities in foundation models and leverage our existing strengths, including UK leadership in AI safety, research and development, to identify and tackle the unique safety challenges presented by this type of AI.

The work of the Taskforce will be vital in seizing the opportunities of AI and building public confidence in its use, complementing work already taking place in AI companies themselves, who are looking at their own measures to ensure development of AI is safe and responsible. They have also committed to give the Taskforce early or priority access to models for research and safety purposes to help build better evaluations and help us better understand the opportunities and risks of these systems.

The global AI safety summit later this year will be an opportunity for leading nations, industry and academia to come together and explore an assessment of the risks, to scope collective research possibilities and to work towards shared safety and security standards and infrastructure.

As President Biden has noted, the UK is well-placed to lead the international effort on AI safety.

Notes to editors:

  • Ian Hogarth and DSIT have responsibility to identify and address any actual, potential or perceived personal or business interests which may conflict, or may be perceived to conflict, with the Chair’s public duties. Ian has agreed to a series of mitigations to manage potential conflicts of interest. This agreement includes, for example, divestments of personal holdings in companies building foundation models or foundation model safety tools. Mitigations are being put in place to address each of the potential conflicts with effect from the start of his role.
  • Regular checks and controls will also be in place to respond to the dynamic development of the Taskforce’s scope and work programme.
