OpenAI:如何设计 AGI 时代的产业政策(全文翻译)

2026-04-14

本文翻译自 OpenAI 刚刚发布的政策白皮书「Industrial Policy for the Intelligence Age: Ideas to Keep People First」,共 13 页,以下为逐段中英对照翻译


OpenAI 刚刚发了一份 13 页的政策文件,探讨 AGI 时代的产业政策应该如何设立,这份文件有几个值得注意的地方:


第一,OpenAI 在正式的政策文件里承认了「经济收益可能集中在少数公司(包括 OpenAI 自己)」


第二,它提出了一系列具体方案,包括公共财富基金、四天工作制、AI 接入权、自适应安全网等


第三,安全和治理部分同样值得注意:模型遏制手册(危险模型释放后怎么办)、事件报告机制(类似航空业的 near-miss 报告)、以及要求前沿 AI 公司采用使命对齐的公司治理结构


第四,这是 Sam Altman 那篇「Intelligence Age」博客的政策落地版本,从愿景走到了操作层面


不管你怎么看 OpenAI 这家公司,这份文件本身值得从业者读一遍。翻译出来供参考


原文链接

https://cdn.openai.com/pdf/561e7512-253e-424b-9734-ef4098440601/Industrial%20Policy%20for%20the%20Intelligence%20Age.pdf


OpenAI:如何设计 AGI 时代的产业政策(全文翻译)

Industrial Policy for the Intelligence Age, OpenAI, April 2026


开场白


The drive to understand has always powered human progress—creating a flywheel from science to technology, from technology to discovery, and from discovery onward to more science. That inexorable forward movement led us to melt sand, add impurities, structure it with atomic precision into computer chips, run energy through those chips, and build systems capable of creating increasingly powerful artificial intelligence.


理解世界的驱动力一直推动着人类进步,形成了一个飞轮:从科学到技术,从技术到发现,从发现再到更多的科学。这种不可阻挡的前进力量让我们熔化沙子、掺入杂质、以原子级精度将其结构化为芯片、给芯片通电,最终构建出能创造越来越强大的人工智能的系统


In just a few years, AI has progressed from systems capable of fast, narrow tasks to models that can perform general tasks people used to need hours to do. Now, we’re beginning a transition toward superintelligence: AI systems capable of outperforming the smartest humans even when they are assisted by AI. No one knows exactly how this transition will unfold. At OpenAI, we believe we should navigate it through a democratic process that gives people real power to shape the AI future they want, and prepare for a range of possible outcomes while building the capacity to adapt. That’s what this document is for—to start a conversation about governing advanced AI in ways that keep people first.


短短几年内,AI 从只能做快速窄域任务的系统,进化到能完成人类需要几小时才能做完的通用任务。现在,我们正在开始向超级智能过渡:能够超越最聪明的人类(即使这些人也在用 AI 辅助)的 AI 系统。没人确切知道这个过渡会如何展开。在 OpenAI,我们认为应该通过民主程序来引导它,让人们有真正的权力去塑造他们想要的 AI 未来,同时为各种可能的结果做准备,并建立适应的能力。这份文件就是为此而生的:开启一场关于如何以人为本治理高级 AI 的对话


The promise of superintelligence is extraordinary. Just as electricity transformed homes, the combustion engine remade mobility, and mass production lowered the cost of essential goods, superintelligence will speed up scientific and medical breakthroughs, significantly increase productivity, lower costs for families by making essential goods cheaper, and open the way for entirely new forms of work, creativity, and entrepreneurship.


超级智能的前景是惊人的。正如电力改变了家庭、内燃机重塑了出行、大规模生产降低了基本商品成本,超级智能将加速科学和医学突破,大幅提高生产力,通过降低基本商品价格来减轻家庭负担,并为全新形式的工作、创造力和创业开辟道路


Today, AI’s impact on work is often measured by the time required for tasks that systems can reliably complete. Frontier systems have advanced from supporting tasks that take people minutes to complete, to tasks that take them hours to complete. If progress continues, we can expect systems to be capable of carrying out projects that currently take people months. This shift will reshape how organizations run, how knowledge is created, and how people find meaning and opportunity. It will also highlight the limitations of today’s policy toolkit and the need for more ambitious ideas to keep people at the center of the transition to superintelligence.


今天,AI 对工作的影响通常用系统能可靠完成的任务所需时间来衡量。前沿系统已经从辅助人类几分钟能完成的任务,进展到辅助需要几小时的任务。如果进展持续,我们可以预期系统将能执行目前需要人类几个月的项目。这种转变将重塑组织运行方式、知识创造方式,以及人们寻找意义和机会的方式。它也将暴露当前政策工具包的局限性,以及在向超级智能过渡中保持以人为本所需要的更大胆的想法


While we strongly believe that AI’s benefits will far outweigh its challenges, we are clear-eyed about the risks—of jobs and entire industries being disrupted; bad actors misusing the technology; misaligned systems evading human control; governments or institutions deploying AI in ways that undermine democratic values; and power and wealth becoming more concentrated instead of more widely shared.


虽然我们坚信 AI 的好处将远超其挑战,但我们对风险保持清醒认知:工作岗位和整个行业被颠覆,恶意行为者滥用技术,失调的系统逃脱人类控制,政府或机构以破坏民主价值的方式部署 AI,以及权力和财富更加集中而非更广泛地共享


Indeed, we highlight these risks here to raise awareness of the need for policy solutions to address them. Unless policy keeps pace with technological change, the institutions and safety nets needed to navigate this transition could fall behind. Ensuring that AI expands access, agency, and opportunity is a central challenge as we move towards superintelligence. We should aim for a future where superintelligence benefits everyone, and where we:


事实上,我们在这里强调这些风险,正是为了提高对政策解决方案需求的认识。除非政策跟上技术变革的步伐,否则引导这一转型所需的制度和安全网可能会落后。确保 AI 扩大人们获取资源、自主行动和抓住机会的能力,是向超级智能迈进过程中的核心挑战。我们应该追求一个超级智能惠及所有人的未来,在这个未来中:


1. 广泛分享繁荣


Share prosperity broadly. The promise of advanced AI is not just technological progress, but a higher quality of life for all. Everyone should have the opportunity to participate in the new opportunities AI creates. Living standards should rise and people should see material improvements through lower costs, better health and education, and more security and opportunity. If AI winds up controlled by, and benefiting only a few, while most people lack agency and access to AI-driven opportunity, we will have failed to deliver on its promise.


高级 AI 的承诺不仅是技术进步,更是所有人生活质量的提高。每个人都应有机会参与 AI 创造的新机遇。生活水平应该提升,人们应该通过更低的成本、更好的医疗和教育、更多的安全感和机会看到实质性的改善。如果 AI 最终被少数人控制和独享,而大多数人缺乏自主权和获取 AI 驱动机遇的途径,那我们就辜负了它的承诺


2. 降低风险


Mitigate risks. The transition toward superintelligence will come with serious risks—from economic disruption, to misuse in areas like cybersecurity and biology, to the loss of alignment or control over increasingly powerful systems. Without effective mitigation, people will be harmed. Avoiding these outcomes requires building new institutions, technical safeguards, and governance frameworks so that advanced systems remain safe, controllable, and aligned—reducing the risk of large-scale harm, protecting critical systems, and ensuring people can rely on AI in their daily lives. As capability scales, safety must scale with it.


向超级智能的过渡将伴随严重风险:经济动荡,网络安全和生物领域的滥用,以及对越来越强大的系统失去对齐或控制。没有有效的缓解措施,人们将受到伤害。避免这些后果需要建立新的制度、技术保障和治理框架,确保高级系统安全、可控、对齐,从而减少大规模伤害的风险、保护关键系统,并确保人们能在日常生活中依赖 AI。能力扩展的同时,安全也必须同步扩展


3. 民主化 AI 的获取和自主权


Democratize access and agency. As capabilities advance, some systems may need to be controlled for safety. But broad participation in the AI economy should not depend on access to the most powerful models—it should depend on access to AI that is useful, affordable, preserves people’s privacy and expands their individual agency. Avoiding a concentration of wealth and control will require ensuring that people everywhere can use AI in ways that give them real influence at work, in markets, and through democratic processes.


随着能力提升,某些系统可能需要出于安全考虑而受到控制。但广泛参与 AI 经济不应取决于能否使用最强大的模型,而应取决于能否使用有用的、负担得起的、保护隐私并扩展个人自主权的 AI。避免财富和控制权的集中,需要确保各地的人们都能以赋予他们在工作、市场和民主程序中真正影响力的方式使用 AI


为什么需要新的产业政策


The Case for a New Industrial Policy. Society has navigated major technological transitions before, but not without real disruption and dislocation along the way. While those transitions ultimately created more prosperity, they required proactive political choices to ensure that growth translated into broader opportunity and greater security. For example, following the transition to the Industrial Age, the Progressive Era and the New Deal helped modernize the social contract for a world reshaped by electricity, the combustion engine, and mass production. They did so by building new public institutions, protections, and expectations about what a fair economy should provide, including labor protections, safety standards, social safety nets, and expanded access to education.


社会过去也经历过重大技术转型,但过程中总伴随着真实的颠覆和错位。虽然这些转型最终创造了更多繁荣,但它们需要积极主动的政治选择,才能确保增长转化为更广泛的机会和更大的安全感。比如,工业时代转型之后,进步时代和罗斯福新政帮助更新了社会契约,以适应被电力、内燃机和大规模生产重塑的世界。它们通过建立新的公共机构、保护措施和对公平经济应提供什么的期望来实现这一点,包括劳动保护、安全标准、社会安全网和扩大教育机会


History shows that democratic societies can respond to technological upheaval with ambition: reimagining the social contract, mediating between capital and labor, and encouraging broad distribution of the benefits of technological progress while preserving pluralism, constitutional checks and balances, and freedom to innovate. The transition to superintelligence will require an even more ambitious form of industrial policy, one that reflects the ability of democratic societies to act collectively, at scale, to shape their economic future so that superintelligence benefits everyone.


历史表明,民主社会能够以雄心壮志回应技术剧变:重新构想社会契约,在资本与劳动之间调和,鼓励技术进步收益的广泛分配,同时保持多元主义、宪政制衡和创新自由。向超级智能的过渡将需要一种更加雄心勃勃的产业政策,一种反映民主社会集体行动能力的政策,以塑造其经济未来,让超级智能惠及所有人


On this path to superintelligence, there are clear steps we need to take today. People are already concerned about what AI will mean for their lives—whether their jobs and families will be safe, and whether data centers will disrupt their communities and raise energy prices. AI data centers should pay their own way on energy so that households aren’t subsidizing them; and they should generate local jobs and tax revenue. Governments should implement common-sense AI regulation—not to entrench incumbents through regulatory capture but to protect children, mitigate national security risks, and encourage innovation.


在通往超级智能的路上,有一些明确的步骤需要今天就采取。人们已经在担心 AI 对他们生活的影响:工作和家庭是否安全,数据中心是否会扰乱社区并推高能源价格。AI 数据中心应该自己承担能源成本,而不是让家庭来补贴;它们应该创造本地就业和税收。政府应该实施常识性的 AI 监管,目的不是通过监管捕获来巩固现有企业,而是保护儿童、缓解国家安全风险、鼓励创新


But the magnitude of the changes we expect and the potential risks we foresee demand even more. We are entering a new phase of economic and social organization that will fundamentally reshape work, knowledge, and production. It requires not just incremental policy responses but ambitious policy ideas for tomorrow that we must start discussing today. This is the moment to start the conversation: to think boldly, explore new ideas, and collaboratively develop a new industrial policy agenda that ensures superintelligence benefits everyone.


但我们预期的变化规模和预见的潜在风险要求更多。我们正在进入一个经济和社会组织的新阶段,它将从根本上重塑工作、知识和生产。这不仅需要渐进式的政策回应,更需要面向未来的大胆政策构想,而这些构想必须从今天就开始讨论。现在是开启对话的时刻:大胆思考,探索新想法,合作制定一个确保超级智能惠及所有人的新产业政策议程


In normal times, the case for letting markets work on their own is strong. Historically, competition, entrepreneurship, and open economic participation have lifted living standards and expanded opportunity. Capitalism, imperfect as it is, remains an effective system for translating human ingenuity into shared prosperity.


在正常时期,让市场自行运作的理由是充分的。历史上,竞争、创业和开放的经济参与提升了生活水平、扩大了机会。资本主义虽不完美,但仍然是一个将人类创造力转化为共享繁荣的有效体系


But industrial policy can play an important role when market forces alone aren’t sufficient—when new technologies create opportunities and risks that existing institutions aren’t equipped to manage. It can help translate scientific breakthroughs into scaled industries and broad-based economic growth.


但当市场力量本身不足、新技术创造出现有制度无力管理的机遇和风险时,产业政策可以发挥重要作用。它可以帮助将科学突破转化为规模化产业和广泛的经济增长


A new industrial policy agenda should use government’s existing toolbox for aligning public and private activities: research funding, workforce development, market-shaping tools, and targeted regulation. But governments should not act alone. Nongovernmental institutions should pilot new approaches, measure what works, and iterate quickly, then governments should reinforce successes by aligning incentives and scaling what works through procurement, regulation, and investment. This public-private collaboration should stave off regulatory capture and centralized control, instead preserving the freedom to innovate while ensuring that the onset of superintelligence isn’t dominated by the most powerful forces in society.


新的产业政策议程应该利用政府现有的工具箱来协调公共和私人活动:研究资金、劳动力发展、市场塑造工具和有针对性的监管。但政府不应单独行动。非政府机构应该试点新方法,衡量什么有效,快速迭代;然后政府通过协调激励,并借助采购、监管和投资推广有效做法,来强化这些成功经验。这种公私合作应该避免监管捕获和集中控制,保留创新自由,同时确保超级智能的到来不被社会中最强大的力量所主导


We don’t have all, or even most of the answers. Different paths will require different policy responses, and no single set of tools will be enough in any scenario. But we should aim to build an AI economy that is both open and resilient through policies that expand participation, broaden access to opportunity, and ensure that society has the safeguards and institutions needed to manage risk.


我们没有全部答案,甚至没有大部分答案。不同的路径需要不同的政策回应,没有任何一套工具在所有情景下都够用。但我们应该致力于建设一个既开放又有韧性的 AI 经济,通过扩大参与、拓宽机会获取、确保社会拥有管理风险所需的保障和制度来实现


This document offers initial ideas for an industrial policy agenda to keep people first during the transition to superintelligence. It is organized in two sections: 1) building an open economy with broad access, participation, and shared prosperity; and 2) building a resilient society through accountability, alignment, and management of frontier risks. OpenAI is offering these ideas to help start a broader conversation about the kinds of policies and institutions needed to navigate the transition. These ideas are intentionally early and exploratory, offered not as a comprehensive or final set of recommendations, but as a starting point for discussion that we invite others to build on, refine, challenge, or choose among through the democratic process.


这份文件为在向超级智能过渡期间坚持以人为本的产业政策议程提供了一些初步想法。它分为两部分:1)建设一个具有广泛准入、参与和共享繁荣的开放经济;2)通过问责、对齐和前沿风险管理来建设一个有韧性的社会。OpenAI 提出这些想法,是为了就驾驭这场过渡所需的政策和制度启动一场更广泛的对话。这些想法是刻意早期和探索性的,不是作为全面或最终的建议,而是作为讨论的起点,我们邀请他人在此基础上发展、完善、挑战,或通过民主程序做出选择


They also focus on the United States as a starting point, but the conversation—and the solutions—must ultimately be global. The transition to superintelligence is not a distant possibility—it’s already underway, and the choices we make in the near term will shape how its benefits and risks are distributed for decades to come.


这些想法以美国为起点,但对话和解决方案最终必须是全球性的。向超级智能的过渡不是遥远的可能性,它已经在进行中,我们在近期做出的选择将决定其收益和风险在未来几十年如何分配


第一部分:建设开放经济


The promise of advanced AI is that it can benefit everyone by translating abundant intelligence into extraordinary progress. It can lower the cost of essential goods, expand opportunity, and give people more time for what is meaningful, relational, and community-building. It can help solve scientific challenges that still elude human effort: curing or preventing diseases, alleviating food scarcity, strengthening agriculture under climate stress, and speeding up breakthroughs in clean, reliable energy. The benefits of major investments in science could emerge within a single lifetime and reach communities far beyond traditional research hubs.


高级 AI 的承诺是,它可以通过将充裕的智能转化为非凡的进步来惠及所有人。它可以降低基本商品成本,扩大机会,给人们更多时间用于有意义的、关系性的、社区建设的事情。它可以帮助解决人类努力仍未攻克的科学挑战:治愈或预防疾病,缓解粮食短缺,在气候压力下加强农业,加速清洁可靠能源的突破。重大科学投资的收益可以在一代人的时间内涌现,惠及远超传统研究中心的社区


Yet the same capabilities making this progress possible will also disrupt jobs and reshape entire industries at a speed and scale unlike any previous technological shift. Some jobs will disappear, others will evolve, and entirely new forms of work will emerge as organizations learn how to deploy advanced AI.


然而,使这些进步成为可能的同样能力,也将以前所未有的速度和规模颠覆工作岗位并重塑整个行业。一些工作将消失,另一些将演变,随着组织学会如何部署高级 AI,全新形式的工作将出现


These changes will not arrive evenly. Without thoughtful policies, AI could widen inequality by compounding advantages for those already positioned to capture the upside while communities that begin with fewer resources fall further behind, excluded from new tools, new industries, and new opportunities. There is also a risk that the economic gains concentrate within a small number of firms like OpenAI, even as the technology itself becomes more powerful and widely used. Workers using AI might well agree that it’s increasing their productivity without believing they’re seeing the benefits.


这些变化不会均匀到来。没有周全的政策,AI 可能会加剧不平等:为那些已经处于有利位置的人叠加优势,而资源较少的社区进一步落后,被排斥在新工具、新行业和新机遇之外。也存在经济收益集中在少数公司(包括 OpenAI)的风险,即使技术本身变得更强大、使用更广泛。使用 AI 的劳动者可能承认它提高了自己的生产力,但并不认为自己从中获益了


Maintaining an open economy that is easily accessed and participatory will require ambitious policymaking. The enclosed ideas include proposals to ensure that workers have a voice in the AI transition, since workers have deep knowledge about how work is actually performed and where AI can make work better and safer. Other proposals suggest new mechanisms to share returns from AI-driven growth by expanding access to capital, sharing economic gains more widely, and aligning the benefits of AI-enabled growth with higher living standards. And they aim to modernize economic security by helping people navigate transitions, access new opportunities, and maintain stability as work changes.


维持一个易于进入和参与的开放经济将需要大胆的政策制定。文中的方案包括:确保劳动者在 AI 转型中有发言权,因为劳动者最清楚工作实际如何完成、AI 在哪些环节能让工作更好更安全;提出分享 AI 驱动增长回报的新机制,通过扩大资本获取、更广泛地分享经济收益、让 AI 驱动增长的好处与更高生活水平挂钩来实现;以及通过帮助人们应对转型、获取新机会、在工作变化中保持稳定来实现经济安全的现代化


劳动者视角


Worker perspectives. Give workers a voice in the AI transition to make work better and safer, including a formal way to collaborate with management to make sure AI improves job quality, enhances safety, and respects labor rights. Workers have deep knowledge about how work is actually performed and where AI can improve outcomes. They will be critical voices in understanding how AI can be used in workplaces to ensure that technological change will not only lead to improved productivity, but also lead to better jobs and stronger, safer workplaces.


在 AI 转型中给劳动者发言权,让工作更好更安全,包括建立与管理层合作的正式机制,确保 AI 提升工作质量、增强安全、尊重劳动权利。劳动者对工作实际如何完成、AI 在哪里能改善结果有深入的认知。他们将是理解 AI 如何在工作场所使用的关键声音,确保技术变革不仅提高生产力,还带来更好的工作和更安全的工作场所


Allow workers to prioritize AI deployments that improve job quality by eliminating dangerous, repetitive, administrative, or exhausting tasks so employees can focus on higher-value work. At the same time, set clear limits on harmful uses of AI that could erode job quality by intensifying workloads, narrowing autonomy, or undermining fair scheduling and pay.


允许劳动者优先推动那些通过消除危险、重复、行政或繁重任务来改善工作质量的 AI 部署,让员工专注于更高价值的工作。同时,对可能通过加大工作量、缩小自主权或破坏公平排班和薪酬来侵蚀工作质量的 AI 使用设置明确限制


AI 优先的创业者


AI-first entrepreneurs. Help workers turn domain expertise into new companies by using AI to handle the overhead that usually blocks entrepreneurship (e.g., accounting, marketing, procurement). Pair microgrants or revenue-based financing with practical “startup-in-a-box” supports such as model contracts and shared back-office infrastructure so that new small businesses can compete quickly. Worker organizations could serve as enablers by offering training, providing shared services, and helping workers negotiate fair commercial terms and protect IP.


帮助劳动者利用 AI 处理通常阻碍创业的开销(如财务、营销、采购),将领域专长转化为新公司。将小额拨款或收入分成融资与「创业工具箱」支持(如模板合同和共享后台基础设施)结合,让新的小企业能快速参与竞争。劳动者组织可以充当赋能者的角色:提供培训、共享服务,帮助劳动者谈判公平商业条款和保护知识产权


AI 接入权


Right to AI. Treat access to AI as foundational for participation in the modern economy, similar to mass efforts to increase global literacy, or to make sure that electricity and the internet reach remote parts of the globe. (The internet still isn’t fairly deployed across the globe or even the US; learn from this and seek to rectify those issues when it comes to AI.) Expand affordable, reliable access to foundational models—the building blocks of modern AI systems—and make a baseline level of capability broadly available, including through free or low-cost access points. Support the education, infrastructure, connectivity, and training needed to use these systems effectively, and make sure that workers, small businesses, schools, libraries, and underserved communities are not excluded from the capabilities that drive productivity and opportunity.


将 AI 接入视为参与现代经济的基础,类似于提高全球识字率的大规模努力,或确保电力和互联网到达偏远地区。(互联网至今在全球甚至美国都没有公平部署,应该从中吸取教训,在 AI 方面纠正这些问题。)扩大对基础模型的可负担、可靠的访问,并使基线能力广泛可用,包括通过免费或低成本的接入点。支持有效使用这些系统所需的教育、基础设施、连接和培训,确保劳动者、小企业、学校、图书馆和服务不足的社区不被排斥在驱动生产力和机会的能力之外


现代化税基


Modernize the tax base. As AI reshapes work and production, the composition of economic activity may shift—expanding corporate profits and capital gains while potentially reducing reliance on labor income and payroll taxes. This could erode the tax base that funds core programs like Social Security, Medicaid, SNAP, and housing assistance—putting them at risk. Tax policy should adapt to ensure these systems remain durable.


随着 AI 重塑工作和生产,经济活动的构成可能发生变化:企业利润和资本收益扩大,而对劳动收入和工资税的依赖可能减少。这可能侵蚀为社会保障、医疗补助、食品券和住房援助等核心项目提供资金的税基,使它们面临风险。税收政策应当适应以确保这些体系持久


Policymakers could rebalance the tax base by increasing reliance on capital-based revenues—such as higher taxes on capital gains at the top, corporate income, or targeted measures on sustained AI-driven returns—and by exploring new approaches such as taxes related to automated labor. These reforms should be paired with wage-linked incentives that encourage firms to retain, retrain, and invest in workers, similar to existing R&D-style credits. Together, these changes would help stabilize funding for essential programs while supporting workforce transitions in an AI-driven economy.


政策制定者可以通过增加对资本收入的依赖来重新平衡税基,比如对高额资本利得、企业所得征收更高税率,或对持续的 AI 驱动回报实施定向措施,同时探索与自动化劳动相关的新税种。这些改革应与工资挂钩激励配套,鼓励企业留住、再培训和投资于劳动者,类似于现有的研发税收抵免。这些变化将共同帮助稳定基本项目的资金,同时支持 AI 驱动经济中的劳动力转型


公共财富基金


Public Wealth Fund. Create a Public Wealth Fund that provides every citizen—including those not invested in financial markets—with a stake in AI-driven economic growth. While tax reforms help ensure governments can continue to fund essential programs, a Public Wealth Fund is designed to ensure that people directly share in the upside of that growth.


创建公共财富基金,为每个公民(包括那些没有投资金融市场的人)提供 AI 驱动经济增长的份额。税收改革帮助确保政府能继续资助基本项目,而公共财富基金旨在确保人们直接分享增长的上行空间


Policymakers and AI companies should work together to determine how to best seed the Fund, which could invest in diversified, long-term assets that capture growth in both AI companies and the broader set of firms adopting and deploying AI. Returns from the Fund could be distributed directly to citizens, allowing more people to participate directly in the upside of AI-driven growth, regardless of their starting wealth or access to capital.


政策制定者和 AI 公司应合作确定如何为基金注入种子资金,基金可以投资于多元化的长期资产,捕获 AI 公司和更广泛的采用和部署 AI 的企业的增长。基金的回报可以直接分配给公民,让更多人直接参与 AI 驱动增长的收益,不论其起始财富或资本获取能力




加速电网扩张


Accelerate grid expansion. Establish new public-private partnership models to finance and accelerate the expansion of energy infrastructure required to power AI. Use these models to address financing constraints, permitting delays, and siting risks that have limited high-voltage interstate and interregional transmission—and to deliver infrastructure at speed and scale, limit taxpayer risk, and share the upside with the public. Partnerships should be structured to minimize taxpayer exposure to commercial losses and ensure that expanded energy infrastructure translates into lower energy costs for households and businesses.


建立新的公私合作模式,为 AI 所需的能源基础设施扩张提供融资并加速推进。利用这些模式解决融资约束、审批延迟和选址风险等限制州际和跨区域高压输电的问题,以速度和规模交付基础设施,限制纳税人风险,并与公众分享收益。合作关系的设计应最大限度减少纳税人面临的商业损失风险,并确保扩大的能源基础设施转化为家庭和企业更低的能源成本


效率红利


Efficiency dividends. Convert efficiency gains from AI into durable improvements in workers’ benefits when routine workload declines and operating costs fall, including incentivizing companies to increase retirement matches or contributions, cover a larger share of healthcare costs, and subsidize child and eldercare.


当常规工作量下降和运营成本降低时,将 AI 带来的效率提升转化为劳动者福利的持久改善,包括激励企业增加退休匹配或缴款、承担更大份额的医疗成本、补贴育儿和养老


Incentivize employers and unions to run time-bound 32-hour/four-day workweek pilots with no loss in pay that hold output and service levels constant, then convert reclaimed hours into a permanent shorter week, bankable paid time off, or both.


激励雇主和工会试行每周 32 小时 / 四天工作制,在不减薪、保持产出和服务水平的前提下进行限时试点,然后将回收的工时转化为永久性的缩短工作周、可存储的带薪休假,或两者兼具


自适应安全网


Adaptive safety nets that work for everyone. Make sure the existing safety net works reliably, quickly, and at scale, because if the transition to superintelligence is going to benefit everyone, the systems designed to provide economic and health security need to deliver without delay or gaps. That starts with unemployment insurance, SNAP, Social Security, Medicaid, and Medicare that are not just in place but fully functional, accessible, and responsive to the realities people will face during the transition.


确保现有安全网可靠、快速、大规模地运作。如果向超级智能的过渡要惠及所有人,那么为提供经济和健康安全而设计的系统就必须没有延迟和缺口地交付。这首先意味着失业保险、食品券、社会保障、医疗补助和医疗保险不仅要到位,还必须全面运作、可及,并能回应人们在转型中面对的现实


Next, invest in clear, real-time measurement of how AI is affecting work, wages, job quality, and sectoral dynamics, using public metrics such as unemployment rates and indicators of regional or industry-specific displacement. These systems should provide policymakers with timely visibility into where disruption is occurring and how severe it is.


其次,投资于对 AI 如何影响工作、工资、工作质量和行业动态的清晰实时衡量,使用失业率、区域性或行业性岗位流失指标等公共指标。这些系统应让政策制定者及时掌握颠覆发生在哪里、严重程度如何


Then, define a package of temporary, expanded safety nets (e.g., expanded or more flexible unemployment benefits, fast cash assistance, wage insurance, training vouchers) that activates automatically when these metrics exceed pre-defined thresholds. When disruption rises above those levels, support would scale up; as conditions stabilize, it would phase out. This ensures that assistance is targeted, time-bound, and proportional to the scale of disruption, and also avoids a permanent expansion of programs.


然后,定义一套临时性的扩展安全网(如扩大或更灵活的失业救济、快速现金援助、工资保险、培训券),当指标超过预设阈值时自动启动。颠覆加剧时,支持力度相应加大;情况稳定后,则逐步退出。这确保了援助是有针对性的、有时间限制的、与颠覆规模成比例的,也避免了项目的永久性扩张
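文中「指标超过预设阈值时自动启动、情况稳定后逐步退出」的机制,可以用一段极简的 Python 代码示意其触发逻辑。以下仅为机制草图:选用的指标(失业率)、阈值数值和函数名均为假设,并非文件原文内容:

```python
# 示意:自适应安全网的自动触发逻辑
# 注意:指标与阈值数值均为假设,仅用于说明机制

def update_support(active: bool, unemployment: float,
                   activate_at: float = 0.065,
                   phase_out_at: float = 0.055) -> bool:
    """返回本期是否应启动扩展安全网。

    采用迟滞(hysteresis)设计:指标升破 activate_at 时启动,
    回落到 phase_out_at 以下才退出,使援助保持临时性、
    与颠覆规模成比例,而非永久扩张。
    """
    if not active:
        return unemployment > activate_at
    return unemployment >= phase_out_at

# 模拟一次失业率冲击及随后的企稳
series = [0.050, 0.060, 0.070, 0.068, 0.058, 0.052, 0.050]
active = False
history = []
for u in series:
    active = update_support(active, u)
    history.append(active)
print(history)  # → [False, False, True, True, True, False, False]
```

迟滞设计(启动阈值高于退出阈值)避免了指标在单一阈值附近波动时支持反复开关,对应原文「有时间限制、与颠覆规模成比例」的要求。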


可携带福利


Portable benefits. Over time, build benefit systems that are not tied to a single employer by expanding access to healthcare, retirement savings, and skills training through portable accounts that follow individuals across jobs, industries, education programs, and entrepreneurial ventures. Public programs can decouple key benefits from employment status by expanding access to retirement and training support regardless of where or how someone works. Implementation can run through portable benefit platforms that pool contributions from multiple sources and route them into standardized accounts attached to the individual, not the job. Retirement systems can also be modernized through pooled structures that allow workers to accrue benefits continuously across employers, reducing gaps and preserving continuity over time.


逐步建立不绑定单一雇主的福利体系,通过可携带账户扩大医疗、退休储蓄和技能培训的获取,这些账户跟随个人跨越工作、行业、教育项目和创业活动。公共项目可以通过扩大退休和培训支持的获取来将关键福利与就业状态脱钩,不论一个人在哪里或如何工作。实施可以通过可携带福利平台进行,汇集来自多个来源的缴款并将其导入绑定个人而非工作岗位的标准化账户。退休体系也可以通过汇集结构进行现代化,让劳动者跨雇主持续积累福利,减少缺口并保持连续性


面向以人为本工作的通道


Pathways into human-centered work. Expand opportunities in the care and connection economy—childcare, eldercare, education, healthcare, and community services—as pathways for workers displaced by AI. Although AI can enhance these roles by reducing administrative burdens and enabling greater personalization, human connection will remain an essential part of the profession. As AI reshapes the labor market, these sectors can absorb transitioning workers if supported with investments in training, wages, and job quality. Governments can build training pipelines, support transitions into care roles, and incentivize employers to raise pay and improve conditions in fields facing chronic shortages.


扩大关爱和连接经济中的机会:育儿、养老、教育、医疗和社区服务,作为被 AI 替代的劳动者的转型通道。虽然 AI 可以通过减少行政负担和实现更大的个性化来增强这些角色,但人际连接仍将是这些职业的核心部分。随着 AI 重塑劳动力市场,如果有培训、薪资和工作质量方面的投资支持,这些行业可以吸收转型中的劳动者。政府可以建设培训管道,支持向护理角色的转型,激励雇主在面临长期短缺的领域提高薪资和改善条件


These initiatives could be complemented with a family benefit that recognizes caregiving as economically valuable work and supports evolving work patterns. This benefit could help cover childcare, education, and healthcare while remaining compatible with part-time work, retraining, or entrepreneurship. Together, these efforts would expand access to care, strengthen communities, and create meaningful, human-centered work.


这些举措可以与一项家庭福利相结合,该福利承认照顾工作是有经济价值的劳动,并支持不断演变的工作模式。这项福利可以帮助覆盖育儿、教育和医疗,同时与兼职工作、再培训或创业兼容。这些努力将共同扩大护理服务的获取,加强社区,并创造有意义的、以人为本的工作


加速科学发现并推广收益


Accelerate scientific discovery and scale the benefits. Build a distributed network of AI-enabled laboratories to dramatically expand the capacity to test and validate AI-generated hypotheses at scale. These labs would integrate AI systems directly into experimental workflows by automating routine processes, capturing high-quality data, and enabling rapid iteration between hypothesis generation and testing.


建设分布式的 AI 赋能实验室网络,大幅扩展大规模测试和验证 AI 生成假说的能力。这些实验室将 AI 系统直接集成到实验工作流中,自动化常规流程,采集高质量数据,实现假说生成和测试之间的快速迭代


Then, build the physical systems and infrastructure needed to translate validated discoveries into real-world use at scale. This includes expanding the capacity of organizations to deploy new technologies, upgrading facilities and systems required for implementation, and aligning financing and incentives to support adoption. It also includes a sustained investment in people: training scientists, technicians, and operators to contribute to AI-enabled science. These investments ensure that breakthroughs move beyond laboratories and into widespread use, while strengthening the workforce and operational systems required to build, maintain, and run the infrastructure that supports AI-enabled discovery. Both laboratory and production infrastructure should be deployed broadly across universities, community colleges, hospitals, and regional research hubs, not concentrated in a small number of elite institutions.


然后,建设将经过验证的发现转化为大规模实际应用所需的物理系统和基础设施。这包括扩大组织部署新技术的能力,升级实施所需的设施和系统,以及调整融资和激励以支持采纳。还包括对人的持续投资:培训科学家、技术人员和操作员以参与 AI 赋能的科学。这些投资确保突破从实验室走向广泛应用,同时加强支持 AI 赋能发现所需的劳动力和运营系统。实验室和生产基础设施都应广泛部署在大学、社区学院、医院和区域研究中心,而不是集中在少数精英机构


第二部分:建设有韧性的社会


As AI systems become more capable and more embedded across the economy, they may introduce new vulnerabilities alongside new abundance. Some systems may be misused for cyber or biological harm. Others may create new pressures on social and emotional well-being, including for young people, if deployed without adequate safeguards. AI systems may act in ways that are misaligned with human intent or operate beyond meaningful human oversight. And as advanced AI reshapes how people, organizations, and governments operate, it may place new strain on the institutions and norms that societies rely on to remain stable, secure, and free.


随着 AI 系统变得更强大、更深入地嵌入经济,它们可能在带来新丰裕的同时引入新的脆弱性。一些系统可能被滥用于网络或生物危害。另一些如果没有充分保障就部署,可能对社会和情感健康(包括青少年)造成新的压力。AI 系统可能以与人类意图不一致的方式行事,或超出有意义的人类监督。随着高级 AI 重塑人、组织和政府的运作方式,它可能对社会赖以保持稳定、安全和自由的制度和规范施加新的压力


We should be clear-eyed about the resilience required here. These new risks won’t be isolated or suitable for addressing one at a time—AI will reshape how work is performed, how decisions are made, how organizations operate, and how states interact. Building resilience therefore means making sure people and institutions can adapt quickly, maintain meaningful agency over how these systems are used, and preserve broadly shared prosperity even as economic and social structures evolve.


我们应该对所需的韧性保持清醒。这些新风险不会是孤立的或适合逐一应对的:AI 将重塑工作方式、决策方式、组织运作方式以及国家互动方式。因此,建设韧性意味着确保人和机构能快速适应,对这些系统的使用方式保持有意义的自主权,并在经济和社会结构演变时保持广泛共享的繁荣


Over the past several years, leading AI developers including OpenAI have focused heavily on upstream safeguards: development of global standards, transparency around evaluations, mitigations, and risks, and investments in model testing, red teaming, and usage policies designed to identify and mitigate risks before deployment. Policymakers have also focused here, codifying requirements in the EU AI Act and in US state-based regulation. These upstream efforts should continue.


过去几年,包括 OpenAI 在内的领先 AI 开发者大量关注上游保障:制定全球标准,围绕评估、缓解措施和风险的透明度,以及投资于模型测试、红队和使用政策,旨在部署前识别和缓解风险。政策制定者也在这方面着力,在欧盟 AI 法案和美国州级法规中将要求编入法律。这些上游努力应该继续


But as AI systems become more capable and more widely deployed, resilience will also depend upon what happens after deployment—when systems must be monitored in real time, operate under uncertainty, and integrate into institutions not designed for agentic workflows.


但随着 AI 系统变得更强大、更广泛部署,韧性也将取决于部署之后发生的事情:当系统必须实时监控、在不确定性下运行、并集成到不是为 Agent 工作流设计的机构中时


This is not a new challenge. As electricity spread, societies built safety standards and regulatory institutions. As automobiles transformed mobility, safety systems reduced risk while preserving freedom of movement. In aviation, continuous monitoring and coordinated response systems made flying one of the safest forms of transportation. In food and medicine, testing and post-market surveillance helped ensure safety in everyday use. In each case, resilience was not automatic—it was built with the luxury of time.


这不是一个新挑战。电力普及时,社会建立了安全标准和监管机构。汽车改变出行时,安全系统降低了风险,同时保留了出行自由。航空领域,持续监控和协调响应系统使飞行成为最安全的交通方式之一。食品和药品领域,测试和上市后监测帮助确保了日常使用中的安全。在每种情况下,韧性都不是自动产生的,而是在时间相对充裕的条件下逐步建设起来的


As we move toward superintelligence, building a resilient society will require a similar but speedier effort that kicks into gear now. The ideas below are a slate of ambitious approaches to building a more resilient society. They focus on building and scaling safety systems that operate in real-world conditions by establishing mechanisms for trust, accountability, and auditing. They suggest opportunities for strengthening governance so that advanced AI remains controllable, transparent, and aligned with democratic values. And they suggest approaches to improve coordination across companies, governments, and countries so that risks can be identified early, information can be shared, and responses can be executed quickly when needed. Together, these proposals extend important safety work already underway and represent initial ideas to keep AI safe, governable, and aligned with democratic values.


向超级智能迈进的过程中,建设有韧性的社会将需要类似但更快速的努力,而且现在就要启动。以下是一系列建设更有韧性社会的大胆方案。它们聚焦于通过建立信任、问责和审计机制来构建和扩展在真实世界条件下运行的安全系统。它们提出了加强治理的机会,使高级 AI 保持可控、透明,并与民主价值一致。它们还提出了改善公司、政府和国家之间协调的方法,以便尽早识别风险、共享信息,并在需要时快速执行应对。这些提案共同延续了已经在进行中的重要安全工作,代表了保持 AI 安全、可治理和与民主价值一致的初步想法


应对新兴风险的安全系统


Safety systems for emerging risks. Research and develop tools that protect models, detect risks, and prevent misuse across high-consequence domains, including cyber and biological risks as well as other pathways to large-scale harm. Expand the use of advanced AI systems for threat modeling, red teaming, net assessments, and robustness testing to identify and anticipate novel risks early and inform mitigation strategies. Develop and scale complementary protective systems; for example, rapid identification and production of medical countermeasures in the event of an outbreak and expanded strategic stockpiles to prepare for future risks. Then, catalyze competitive safety markets by creating sustained demand for these capabilities through procurement, standards, insurance frameworks, and advance-purchase commitments. Over time, this approach can make safeguards an output of innovation and competition, ensuring that defenses improve as quickly as the risks they are designed to address.


研发保护模型、检测风险和防止滥用的工具,覆盖高后果领域,包括网络和生物风险以及其他大规模伤害途径。扩大高级 AI 系统在威胁建模、红队、净评估和鲁棒性测试中的使用,以尽早识别和预测新型风险,并为缓解策略提供依据。开发和扩展互补保护系统,比如在疫情爆发时快速识别和生产医疗对策,以及扩大战略储备以应对未来风险。然后,通过采购、标准、保险框架和预购承诺创造对这些能力的持续需求,催化竞争性的安全市场。随着时间推移,这种方法可以使保障措施成为创新和竞争的产出,确保防御措施与其所针对的风险同步改进


AI 信任栈


AI trust stack. Research and develop systems that help people trust and verify AI systems, the content they produce, and the actions they take—especially as these systems take on more real-world responsibilities. Advance the development of provenance and verification standards and tools that can build trust in AI systems while preserving privacy. This could include enabling secure, verifiable signatures for actions such as generating content or issuing instructions, and developing privacy-preserving logging and audit systems capable of supporting investigation and accountability without enabling pervasive surveillance.


研发帮助人们信任和验证 AI 系统、其产出内容和采取行动的系统,尤其是当这些系统承担更多现实世界职责时。推进溯源和验证标准及工具的开发,在保护隐私的同时建立对 AI 系统的信任。这可以包括为生成内容或发出指令等行为提供安全、可验证的签名,以及开发能支持调查和问责但不会导致普遍监控的隐私保护日志和审计系统


These types of solutions should capture key information about system behavior and use while minimizing the collection of sensitive data, and be designed to support investigation or intervention under clearly defined legal or safety conditions. This work could also include developing and testing governance frameworks that clarify responsibility within organizations, including how accountability could be assigned to specific roles and how delegation, monitoring, and escalation processes could function as systems become more capable. Over time, these efforts could establish a foundation for accountability by building trust in AI interactions and helping ensure that when harm occurs, responsibility can be appropriately allocated.


这类解决方案应在最小化敏感数据收集的同时捕获关于系统行为和使用的关键信息,并被设计为在明确定义的法律或安全条件下支持调查或干预。这项工作还可以包括开发和测试治理框架,明确组织内部的责任,包括如何将问责分配给特定角色,以及随着系统变得更强大,委托、监控和升级流程如何运作。随着时间推移,这些努力可以通过在 AI 交互中建立信任并帮助确保当伤害发生时责任能被适当分配来建立问责的基础
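上述「可验证签名 + 隐私保护审计日志」的思路,可以用一个极简的草图来示意。以下是译者补充的假设性 Python 示例(密钥、函数名和字段均为演示用假设,并非白皮书或任何真实系统的实现;真实方案应使用 Ed25519 等公钥签名体系,这里仅以标准库的对称 HMAC 作替代):对每条 AI 行为记录生成可验证签名,而审计日志只保存内容摘要并用哈希链防篡改,从而在支持调查问责的同时最小化敏感数据收集。

```python
import hashlib
import hmac
import json
import time

# 假设:演示用对称密钥;真实系统应使用公钥签名(如 Ed25519),而非共享密钥
SECRET_KEY = b"demo-signing-key"


def sign_action(action: dict) -> dict:
    """为一条 AI 行为记录生成可验证签名(最小示意)。"""
    payload = json.dumps(action, sort_keys=True).encode()
    signature = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return {"action": action, "signature": signature}


def verify_action(record: dict) -> bool:
    """重算签名并与记录中的签名比对,验证行为记录未被篡改。"""
    payload = json.dumps(record["action"], sort_keys=True).encode()
    expected = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, record["signature"])


class AuditLog:
    """哈希链式审计日志:只记录行为摘要而非原始内容,以减少敏感数据收集。"""

    def __init__(self):
        self.entries = []
        self._prev_hash = "0" * 64  # 链的起始哈希

    def append(self, record: dict) -> None:
        # 仅存内容摘要,不存原文;每条日志链接到前一条的哈希
        digest = hashlib.sha256(
            json.dumps(record, sort_keys=True).encode()
        ).hexdigest()
        entry = {
            "content_digest": digest,
            "prev_hash": self._prev_hash,
            "ts": time.time(),
        }
        self._prev_hash = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()
        ).hexdigest()
        self.entries.append(entry)

    def verify_chain(self) -> bool:
        """逐条重算哈希链;任何一条被篡改都会使链断裂。"""
        prev = "0" * 64
        for entry in self.entries:
            if entry["prev_hash"] != prev:
                return False
            prev = hashlib.sha256(
                json.dumps(entry, sort_keys=True).encode()
            ).hexdigest()
        return True
```

这类设计的要点在于:验证者无需看到原始交互内容即可确认日志完整性,只有在「明确定义的法律或安全条件下」才需要调取原文与摘要比对。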


审计制度


Auditing regimes. Strengthen institutions such as the Center for AI Standards and Innovation (CAISI) to develop auditing standards for frontier AI risks in coordination with national security agencies. Use tools such as government procurement, advance-purchase commitments, insurance frameworks, and standards-setting to create and scale a competitive market of auditors and evaluators capable of assessing AI systems and products for safety and security risks, building auditing capacity alongside the technology. Standards should be designed for international adoption to reduce fragmentation and avoid creating unnecessary compliance burdens for small companies, as well as those operating across jurisdictions.


强化 AI 标准与创新中心(CAISI)等机构,与国家安全机构协调制定前沿 AI 风险的审计标准。利用政府采购、预购承诺、保险框架和标准制定等工具,创建和扩大能够评估 AI 系统和产品安全与安保风险的审计师和评估师竞争性市场,使审计能力与技术同步增长。标准应为国际采纳而设计,减少碎片化,避免为小公司和跨辖区运营的公司造成不必要的合规负担


As we progress toward superintelligence, there may come a point where a narrow set of highly capable models—particularly those that could materially advance chemical, biological, radiological, nuclear, or cyber risks—require stronger controls, including pre- and post-deployment audits using the standards developed in advance. Apply these requirements only to a small number of companies and the most advanced models, preserving a vibrant ecosystem of less powerful systems and the startups building on them. This approach maintains broad access to general-purpose AI while applying targeted safeguards where failures could create the greatest harm, avoiding unnecessary barriers that could limit competition or enable regulatory capture.


随着向超级智能推进,可能会到达一个节点:少数能力极强的模型(特别是那些可能实质性加剧化学、生物、放射、核或网络风险的模型)需要更严格的控制,包括依照预先制定的标准进行部署前和部署后审计。这些要求仅适用于少数公司和最先进的模型,为能力较弱的系统及基于它们构建的初创企业保留活跃的生态。这种方法保持了对通用 AI 的广泛访问,同时在失败可能造成最大伤害的地方实施有针对性的保障,避免可能限制竞争或导致监管捕获的不必要壁垒


模型遏制手册


Model-containment playbooks. Develop and test coordinated playbooks to contain dangerous AI systems once they have been released into the world. As AI capabilities advance, societies may face scenarios where dangerous systems cannot be easily recalled—because model weights have been released, developers are unwilling or unable to limit access to dangerous capabilities, or the systems are autonomous and capable of replicating themselves. In these cases, the challenge is containment: limiting the spread of dangerous capabilities, reducing harm, and coordinating responses under real-world constraints. Experience from other high-consequence domains, such as cybersecurity and public health, shows that even when full containment is not possible, coordinated action can still meaningfully reduce impact.


制定并测试协调一致的应对手册,以便在危险 AI 系统已经释放到世界后对其进行遏制。随着 AI 能力推进,社会可能面临危险系统无法轻易召回的情景:模型权重已经公开,开发者不愿或无法限制对危险能力的访问,或系统是自主的且能自我复制。在这些情况下,挑战在于遏制:限制危险能力的扩散,减少伤害,并在现实世界的约束下协调响应。网络安全和公共卫生等其他高后果领域的经验表明,即使无法完全遏制,协调行动仍能显著减少影响


使命对齐的公司治理


Mission-aligned corporate governance. Frontier AI companies should adopt governance structures that embed public-interest accountability into decision-making, such as Public Benefit Corporations with mission-aligned governance. These structures should include explicit commitments to ensure that the benefits of AI are broadly shared, including through significant, long-term philanthropic or charitable giving. At the same time, harden frontier systems against corporate or insider capture by securing model weights and training infrastructure, auditing models for manipulative behaviors or hidden loyalties, and monitoring high-risk deployments so no individual or internal faction can quietly use AI systems to concentrate power.


前沿 AI 公司应采用将公共利益问责嵌入决策的治理结构,如使命对齐治理的公共利益公司。这些结构应包含明确承诺,确保 AI 的收益广泛共享,包括通过重大的长期慈善捐赠。同时,通过保护模型权重和训练基础设施、审计模型是否存在操纵行为或隐藏忠诚度、监控高风险部署,使前沿系统免受企业或内部人员捕获,确保没有个人或内部派系能悄悄利用 AI 系统来集中权力


政府使用 AI 的护栏


Guardrails for government use. Have policymakers establish clear rules for how governments can and cannot use AI, with especially high standards for reliability, alignment, and safety. These standards should be codified in law and reinforced through technical safeguards. At the same time, use AI to strengthen democratic accountability. As more government decisions are made through AI-assisted workflows, these systems will create clearer digital records of government reasoning and action that can be logged alongside other public records. With appropriate safeguards, oversight institutions such as inspectors general, congressional committees, and courts could use AI-enabled auditing tools to detect abuse, identify harms, and improve accountability at scale.


由政策制定者建立关于政府如何使用和不使用 AI 的明确规则,对可靠性、对齐性和安全性设置特别高的标准。这些标准应被编入法律并通过技术保障加以强化。同时,利用 AI 加强民主问责。随着更多政府决策通过 AI 辅助工作流做出,这些系统将创建更清晰的政府推理和行动的数字记录,可以与其他公共记录一起归档。在适当的保障下,监察长、国会委员会和法院等监督机构可以使用 AI 赋能的审计工具来检测滥用、识别伤害,并大规模提升问责能力


Also, modernize transparency frameworks (including the Freedom of Information Act) to allow citizens and watchdog organizations to use AI to review targeted questions about government actions while protecting sensitive information. This could include clarifying when AI-interaction logs and agentic action logs constitute federal records that must be retained for specified periods.


此外,现代化透明度框架(包括信息自由法),允许公民和监督组织使用 AI 审查关于政府行为的针对性问题,同时保护敏感信息。这可以包括明确 AI 交互日志和 Agent 行动日志何时构成必须保留指定期限的联邦记录


公众意见输入机制


Mechanisms for public input. Create structured ways for public input so that alignment isn’t defined only by engineers or executives behind closed doors. As advanced AI makes more decisions that affect people’s lives, societies need shared clarity about what these systems are supposed to do, what values should guide them, and how well they are performing. Make alignment more democratic, legible, and accountable through transparent specifications, evaluation frameworks, and representative input processes. Developers should publish model specifications that describe how systems are intended to behave and share information about how those systems are evaluated. Governments and public institutions should help shape these standards by anchoring them in democratic laws and values, while establishing mechanisms for representative public input to be considered alongside traditional business stakeholders. Together, these approaches help ensure that the advancement of AI reflects the perspectives of the societies that must live with its consequences.


创建结构化的公众意见输入渠道,使对齐不仅仅由工程师或高管在闭门会议中定义。随着高级 AI 做出越来越多影响人们生活的决策,社会需要就这些系统应该做什么、什么价值观应指导它们、以及它们表现如何达成共同的清晰认知。通过透明的规格说明、评估框架和代表性输入流程,使对齐更加民主、可读和可问责。开发者应发布描述系统预期行为的模型规格书,并分享系统评估的信息。政府和公共机构应通过将这些标准锚定在民主法律和价值观中来帮助塑造它们,同时建立机制让代表性的公众意见与传统商业利益相关者一起被考虑。这些方法共同帮助确保 AI 的发展反映必须与其后果共存的社会的视角


事件报告


Incident reporting. Establish a mechanism for companies to share information about incidents, misuse, and near-misses with a designated public authority. The system should emphasize learning and prevention over punishment, with appropriately scoped public disclosures that ensure transparency and democratic oversight while protecting sensitive technical, national security, and competitive information. Near-miss reporting could include cases where models exhibited concerning internal reasoning, unexpected capabilities, or other warning signals—even if safeguards ultimately prevented harm—so the ecosystem can learn from close calls before they become real incidents.


建立企业向指定公共机构共享事件、滥用和未遂事件信息的机制。该系统应强调学习和预防而非惩罚,通过适当范围的公开披露确保透明和民主监督,同时保护敏感的技术、国家安全和商业竞争信息。未遂事件报告可以包括模型表现出令人担忧的内部推理、意外能力或其他警告信号的案例,即使保障措施最终防止了伤害,生态系统也可以在险情变成真正事故之前从中学习
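作为示意,这样一份「事件 / 未遂事件」报告在数据结构上可以是什么样子,下面给出译者补充的假设性 Python 草图(字段名、分级方式和方法名均为演示假设,并非白皮书提出的具体格式):报告区分已发生事件与未遂事件,敏感技术细节仅提交给指定公共机构,生成公开披露版本时自动剔除。

```python
from dataclasses import dataclass, field, asdict
from enum import Enum


class Severity(Enum):
    NEAR_MISS = "near_miss"  # 未遂事件:保障措施最终防止了伤害
    INCIDENT = "incident"    # 已造成实际影响的事件


@dataclass
class IncidentReport:
    """假设性的事件报告结构:完整版提交监管机构,公开版剔除敏感字段。"""
    reporter_org: str
    severity: Severity
    summary: str                                          # 面向公开披露的概述
    warning_signals: list = field(default_factory=list)  # 如:异常内部推理、意外能力
    sensitive_details: str = ""                           # 仅提交指定公共机构,不公开

    def public_disclosure(self) -> dict:
        """适当范围的公开披露:保留可供生态学习的信息,剔除敏感细节。"""
        d = asdict(self)
        d.pop("sensitive_details")
        d["severity"] = self.severity.value
        return d
```

这种「双轨」结构对应原文的取向:强调学习和预防而非惩罚,公开披露保证透明和民主监督,同时保护敏感的技术、国家安全和商业竞争信息。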


国际信息共享


International information-sharing around AI capabilities, risks, and mitigations. Strengthen national evaluation institutions as the foundation for international coordination, beginning with expanding the role of the CAISI as a trusted technical body for evaluating frontier systems, assessing safeguards, and informing government understanding of advanced AI capabilities. Building on this foundation, develop a global network of AI Institutes that collaborate through shared protocols for information exchange, joint evaluations, and coordinated mitigation measures.


围绕 AI 能力、风险和缓解措施的国际信息共享。以强化国家评估机构作为国际协调的基础,首先扩大 CAISI 作为评估前沿系统、评估保障措施和促进政府理解高级 AI 能力的可信技术机构的角色。在此基础上,发展一个全球 AI 研究所网络,通过共享的信息交换协议、联合评估和协调缓解措施进行合作


Over time, this network could evolve into an international framework akin to the other multilateral institutions focused on safety and standards, one that gives trusted public authorities visibility into frontier AI development; and creates secure cross-lab and cross-country channels for sharing evaluation results, alignment findings, and emerging risks; and likewise supports communicating during crises. To enable effective collaboration, policymakers should ensure that companies can share safety- and risk-related information through these channels without running afoul of antitrust or competition constraints, using clear safe harbors and narrowly scoped information-sharing rules. This system should expand beyond a narrow focus on national security to include a broader range of societal risks, including impacts on youth safety and well-being.


随着时间推移,这一网络可以演变为类似于其他专注于安全和标准的多边机构的国际框架:给可信的公共机构提供对前沿 AI 开发的可见性,创建安全的跨实验室和跨国渠道用于分享评估结果、对齐发现和新兴风险,并同样支持危机期间的沟通。为实现有效合作,政策制定者应确保企业能通过这些渠道分享安全和风险相关信息而不违反反垄断或竞争法约束,为此可采用明确的安全港和范围严格限定的信息共享规则。该系统应超越对国家安全的狭隘关注,扩展纳入更广泛的社会风险,包括对青少年安全和福祉的影响


开启对话


We offer these ideas not as fixed answers but as a starting point for a broader conversation about how to ensure that AI benefits everyone. That conversation should be inclusive and ongoing—engaging governments, companies, researchers, civil society, communities, and families—and should be mediated through democratic processes that give people real power to shape the AI future they want. It also needs to expand globally—bringing in the perspectives of cultures, societies, and governments around the world.


我们提出这些想法不是作为固定答案,而是作为关于如何确保 AI 惠及所有人的更广泛对话的起点。这场对话应该是包容的和持续的,纳入政府、企业、研究者、公民社会、社区和家庭,并应通过赋予人们真正权力来塑造他们想要的 AI 未来的民主程序来进行。它也需要扩展到全球,引入世界各地文化、社会和政府的视角


These ideas are our first contribution to that effort, but only the beginning. Progress will depend on continued iteration, experimentation, and collaboration across institutions and sectors. To help sustain momentum, OpenAI is: (1) welcoming and organizing feedback through newindustrialpolicy@openai.com; (2) establishing a pilot program of fellowships and focused research grants of up to $100,000 and up to $1 million in API credits for work that builds on these and related policy ideas; and (3) convening discussions at our new OpenAI Workshop opening in May in Washington, DC.


这些想法是我们对这一努力的第一份贡献,但只是开始。进展将取决于跨机构和跨部门的持续迭代、实验和合作。为维持势头,OpenAI 正在:(1)通过 newindustrialpolicy@openai.com 收集和组织反馈;(2)设立试点项目,提供最高 10 万美元的研究金和最高 100 万美元的 API 额度,资助基于这些政策构想的研究;(3)在 5 月将在华盛顿特区开设的新 OpenAI Workshop 召集讨论


文章来自于"赛博禅心",作者 "金色传说大聪明"。
