Tender

Safeguarded AI: TA1.4 Socio-technical Integration

ADVANCED RESEARCH AND INVENTION AGENCY

This public procurement record has 1 release in its history.

Tender

15 Oct 2024 at 11:42

Summary of the contracting process

The Advanced Research and Invention Agency (ARIA) has launched a competitive tender process for its project titled "Safeguarded AI: TA1.4 Socio-technical Integration." The procurement covers research services in the socio-technical integration of advanced AI systems, under the services category. Submissions are to be made electronically by the deadline of 02 January 2025, using a competitive procedure with negotiation. The main location for this project is the United Kingdom. The total funding available is £3.4 million, the contract period is 540 days, and additional funding, scope, or duration may be added to awarded contracts based on their outputs.

This tender offers substantial opportunities for businesses, particularly those specialising in economic, social, legal, and political sciences. With a focus on creating AI systems that incorporate socio-technical integration and safeguard techniques, it is ideally suited for entities capable of addressing complex problems related to AI governance and safety. Businesses that can propose innovative solutions for deliberative processes, quantitative bargaining, and governability tools will find this tender highly beneficial for growth. It is an excellent opportunity for R&D creators who are keen to influence the future of AI technology and ensure its benefits are maximised while mitigating associated risks.


Notice Title

Safeguarded AI: TA1.4 Socio-technical Integration

Notice Description

ARIA is an R&D funding agency built to unlock scientific and technological breakthroughs that benefit everyone. We empower scientists and engineers to pursue research at the edge of what is technologically or scientifically possible. We will reach across disciplines, sectors and institutions to shape, fund and manage projects across the R&D ecosystem, from startups to universities, to break down silos and discover new pathways. We are looking for proposals for Safeguarded AI: TA1.4 Socio-technical Integration. For more information, see https://www.aria.org.uk/programme-safeguarded-ai/

Lot Information

Lot 1

Why this programme: as AI becomes more capable, it has the potential to power scientific breakthroughs, enhance global prosperity, and safeguard us from disasters. But only if it's deployed wisely. Current techniques working to mitigate the risk of advanced AI systems have serious limitations and can't be relied upon empirically to ensure safety. To date, very little R&D effort has gone into approaches that provide quantitative safety guarantees for AI systems, because they're considered impossible or impractical.

What we're shooting for: by combining scientific world models and mathematical proofs, we aim to construct a 'gatekeeper', an AI system tasked with understanding and reducing the risks of other AI agents. In doing so we'll develop quantitative safety guarantees for AI in the way we have come to expect for nuclear power and passenger aviation.

Our goal: to usher in a new era for AI safety, allowing us to unlock the full economic and social benefits of advanced AI systems while minimising risks.

The third solicitation for this programme is focused on TA1.4 Socio-technical Integration. Backed by £3.4m, we're looking to support teams from the economic, social, legal and political sciences to consider the sound socio-technical integration of Safeguarded AI systems. This solicitation seeks R&D Creators, individuals and teams that ARIA will fund, to work on problems that are plausibly critical to ensuring that the technologies developed as part of the programme will be used in the best interest of humanity at large, and that they are designed in a way that enables their governability through representative processes of collective deliberation and decision-making.

A few examples of the open problems we're looking for people to work on:

- Qualitative deliberation facilitation: what tools or processes best enable representative input, collective deliberation and decision-making about safety specifications, acceptable risk thresholds, or success conditions for a given application domain? We hope to integrate these into the Safeguarded AI scaffolding.
- Quantitative bargaining solutions: what social choice mechanisms or quantitative bargaining solutions could best navigate irreconcilable differences in stakeholders' goals, risk tolerances, and preferences, so that Safeguarded AI systems can serve a multi-stakeholder notion of public good?
- Governability tools for society: how can we ensure that Safeguarded AI systems are governed in societally beneficial and legitimate ways?
- Governability tools for R&D organisations: organisations developing Safeguarded AI capabilities have the potential to create significant externalities, both risks and benefits. What decision-making and governance mechanisms best ensure that entities developing or deploying Safeguarded AI capabilities keep these externalities as appropriately major factors in their decision-making?

We are also open to applications proposing other lines of work which illuminate critical socio-technical dimensions of Safeguarded AI systems, provided they propose solutions to increase assurance that these systems will reliably be developed and deployed in service of humanity at large.

Options: Additional funding, scope and duration could be added to any contracts awarded.

Publication & Lifecycle

Open Contracting ID
ocds-h6vhtk-04ac25
Publication Source
Find A Tender Service
Latest Notice
https://www.find-tender.service.gov.uk/Notice/033130-2024
Current Stage
Tender
All Stages
Tender

Procurement Classification

Notice Type
Tender Notice
Procurement Type
Standard
Procurement Category
Services
Procurement Method
Selective
Procurement Method Details
Competitive procedure with negotiation
Tender Suitability
Not specified
Awardee Scale
Not specified

Common Procurement Vocabulary (CPV)

CPV Divisions

73 - Research and development services and related consultancy services


CPV Codes

73110000 - Research services

Notice Value(s)

Tender Value
Not specified
Lots Value
Not specified
Awards Value
Not specified
Contracts Value
Not specified

Notice Dates

Publication Date
15 Oct 2024
Submission Deadline
2 Jan 2025 (expired)
Future Notice Date
Not specified
Award Date
Not specified
Contract Period
Not specified - Not specified
Recurrence
Not specified

Notice Status

Tender Status
Active
Lots Status
Active
Awards Status
Not Specified
Contracts Status
Not Specified

Contracting Authority (Buyer)

Main Buyer
ADVANCED RESEARCH AND INVENTION AGENCY
Contact Name
Not specified
Contact Email
clarifications@aria.org.uk
Contact Phone
Not specified

Buyer Location

Locality
LONDON
Postcode
NW1 2DB
Post Town
North West London
Country
England

Major Region (ITL 1)
TLI London
Basic Region (ITL 2)
TLI3 Inner London - West
Small Region (ITL 3)
TLI36 Camden
Delivery Location
Not specified

Local Authority
Camden
Electoral Ward
St Pancras & Somers Town
Westminster Constituency
Holborn and St Pancras

Open Contracting Data Standard (OCDS)

View full OCDS Record for this contracting process


The Open Contracting Data Standard (OCDS) is a framework designed to increase transparency and access to public procurement data in the public sector. It is widely used by governments and organisations worldwide to report on procurement processes and contracts.
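Because OCDS releases are plain JSON, the record reproduced below can be consumed programmatically. Here is a minimal Python sketch, using only the standard library, that parses a trimmed subset of this notice's release and reads a few key fields; the field paths (`ocid`, `tender.tenderPeriod.endDate`, `tender.lots[].contractPeriod.durationInDays`) mirror the full record on this page.

```python
import json

# Trimmed subset of the OCDS release shown below; the full record
# contains many more fields (parties, buyer, classification, etc.).
record = json.loads("""
{
    "ocid": "ocds-h6vhtk-04ac25",
    "tender": {
        "title": "Safeguarded AI: TA1.4 Socio-technical Integration",
        "status": "active",
        "tenderPeriod": {"endDate": "2025-01-02T12:00:00Z"},
        "lots": [{"id": "1", "contractPeriod": {"durationInDays": 540}}]
    }
}
""")

# Pull out the contracting-process identifier and tender metadata.
ocid = record["ocid"]
title = record["tender"]["title"]
deadline = record["tender"]["tenderPeriod"]["endDate"]
total_days = sum(
    lot["contractPeriod"]["durationInDays"]
    for lot in record["tender"]["lots"]
)

print(ocid)        # ocds-h6vhtk-04ac25
print(deadline)    # 2025-01-02T12:00:00Z
print(total_days)  # 540
```

The same approach scales to bulk analysis: Find a Tender and other publishers expose full OCDS releases over HTTP, so the only extra step is fetching the JSON before parsing it.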

{
    "tag": [
        "compiled"
    ],
    "id": "ocds-h6vhtk-04ac25-2024-10-15T12:42:15+01:00",
    "date": "2024-10-15T12:42:15+01:00",
    "ocid": "ocds-h6vhtk-04ac25",
    "description": "Detailed timelines can be found in the programme call information on ARIA's website: https://www.aria.org.uk/programme-safeguarded-ai/ The deadline for submission of a full proposal is 02 January 2025 (12:00 GMT). The total funding value is the estimated budget available. We expect to fund multiple applicants. Funding is anticipated to be awarded via both contracts and grants. For information on how we fund, see https://www.aria.org.uk/faqs-funding/",
    "initiationType": "tender",
    "tender": {
        "id": "ocds-h6vhtk-04ac25",
        "legalBasis": {
            "id": "32014L0024",
            "scheme": "CELEX"
        },
        "title": "Safeguarded AI: TA1.4 Socio-technical Integration",
        "status": "active",
        "classification": {
            "scheme": "CPV",
            "id": "73110000",
            "description": "Research services"
        },
        "mainProcurementCategory": "services",
        "description": "ARIA is an R&D funding agency built to unlock scientific and technological breakthroughs that benefit everyone. We empower scientists and engineers to pursue research at the edge of what is technologically or scientifically possible. We will reach across disciplines, sectors and institutions to shape, fund and manage projects across the R&D ecosystem, from startups to universities, to break down silos and discover new pathways. We are looking for proposals for Safeguarded AI: TA1.4 Socio-technical Integration. For more info see https://www.aria.org.uk/programme-safeguarded-ai/",
        "lots": [
            {
                "id": "1",
                "description": "Why this programme: as AI becomes more capable, it has the potential to power scientific breakthroughs, enhance global prosperity, and safeguard us from disasters. But only if it's deployed wisely. Current techniques working to mitigate the risk of advanced AI systems have serious limitations, and can't be relied upon empirically to ensure safety. To date, very little R&D effort has gone into approaches that provide quantitative safety guarantees for AI systems, because they're considered impossible or impractical. What we're shooting for: by combining scientific world models and mathematical proofs we will aim to construct a 'gatekeeper', an AI system tasked with understanding and reducing the risks of other AI agents. In doing so we'll develop quantitative safety guarantees for AI in the way we have come to expect for nuclear power and passenger aviation. Our goal: to usher in a new era for AI safety, allowing us to unlock the full economic and social benefits of advanced AI systems while minimising risks. The third solicitation for this programme is focused on TA1.4 Socio-technical Integration. Backed by £3.4m, we're looking to support teams from the economic, social, legal and political sciences to consider the sound socio-technical integration of Safeguarded AI systems. This solicitation seeks R&D Creators - individuals and teams that ARIA will fund - to work on problems that are plausibly critical to ensuring that the technologies developed as part of the programme will be used in the best interest of humanity at large, and that they are designed in a way that enables their governability through representative processes of collective deliberation and decision-making. A few examples of the open problems we're looking for people to work on: - Qualitative deliberation facilitation: What tools or processes best enable representative input, collective deliberation and decision-making about safety specifications, acceptable risk thresholds, or success conditions for a given application domain? We hope to integrate these into the Safeguarded AI scaffolding. - Quantitative bargaining solutions: What social choice mechanisms or quantitative bargaining solutions could best navigate irreconcilable differences in stakeholders' goals, risk tolerances, and preferences, in order for Safeguarded AI systems to serve a multi-stakeholder notion of public good? - Governability tools for society: How can we ensure that Safeguarded AI systems are governed in societally beneficial and legitimate ways? - Governability tools for R&D organisations: Organisations developing Safeguarded AI capabilities have the potential to create significant externalities - both risks and benefits. What set of decision-making and governance mechanisms are best to ensure that entities developing or deploying Safeguarded AI capabilities have and maintain these externalities as appropriately major factors in their decision-making? We are also open to applications proposing other lines of work which illuminate critical socio-technical dimensions of Safeguarded AI systems, if they propose solutions to increase assurance that these systems will reliably be developed and deployed in service of humanity at large.",
                "contractPeriod": {
                    "durationInDays": 540
                },
                "hasRenewal": false,
                "submissionTerms": {
                    "variantPolicy": "notAllowed"
                },
                "hasOptions": true,
                "options": {
                    "description": "Additional funding, scope and duration could be added to any contracts awarded."
                },
                "status": "active"
            }
        ],
        "items": [
            {
                "id": "1",
                "deliveryAddresses": [
                    {
                        "region": "UK"
                    }
                ],
                "relatedLot": "1"
            }
        ],
        "submissionMethod": [
            "electronicSubmission"
        ],
        "submissionMethodDetails": "https://www.aria.org.uk/programme-safeguarded-ai/",
        "procurementMethod": "selective",
        "procurementMethodDetails": "Competitive procedure with negotiation",
        "tenderPeriod": {
            "endDate": "2025-01-02T12:00:00Z"
        },
        "submissionTerms": {
            "languages": [
                "en"
            ]
        },
        "hasRecurrence": false
    },
    "parties": [
        {
            "id": "GB-FTS-79144",
            "name": "ADVANCED RESEARCH AND INVENTION AGENCY",
            "identifier": {
                "legalName": "ADVANCED RESEARCH AND INVENTION AGENCY",
                "noIdentifierRationale": "notOnAnyRegister"
            },
            "address": {
                "streetAddress": "96 EUSTON ROAD,",
                "locality": "LONDON",
                "region": "UKI31",
                "postalCode": "NW12DB",
                "countryName": "United Kingdom"
            },
            "contactPoint": {
                "email": "clarifications@aria.org.uk",
                "url": "https://www.aria.org.uk/programme-safeguarded-ai/"
            },
            "roles": [
                "buyer"
            ],
            "details": {
                "url": "https://www.aria.org.uk",
                "classifications": [
                    {
                        "scheme": "TED_CA_TYPE",
                        "id": "BODY_PUBLIC",
                        "description": "Body governed by public law"
                    },
                    {
                        "scheme": "COFOG",
                        "id": "01",
                        "description": "General public services"
                    }
                ]
            }
        },
        {
            "id": "GB-FTS-127057",
            "name": "See the ARIA Act 2022",
            "identifier": {
                "legalName": "See the ARIA Act 2022"
            },
            "address": {
                "locality": "London",
                "countryName": "United Kingdom"
            },
            "roles": [
                "reviewBody"
            ]
        }
    ],
    "buyer": {
        "id": "GB-FTS-79144",
        "name": "ADVANCED RESEARCH AND INVENTION AGENCY"
    },
    "language": "en"
}