Who Owns What AGI Creates? Nobody. That's the Problem.
Creatorship of AGI
David Pedersen, PNW AGI Group, April 2026
Abstract
Roman Yampolskiy's 2022 paper "Ownability of AGI" identified multiple structural barriers to AGI ownership — principally unpredictability, unexplainability, and uncontrollability, alongside self-modification, ease of theft, and contested AI rights — concluding that AGI, as a class of system, probably cannot be owned in any legally meaningful sense [9]. That paper asked whether AGI could be an object of ownership. This paper asks the harder follow-on: when AGI is the subject doing the creating, what is intellectual property even for? Five structural obstacles are identified: the Authorship Void, the Inventorship Fiction, the Training Data Ratchet, the Incentive Collapse, and the Compute Concentration Problem. Each represents not an edge case to be legislated around, but a logical incompatibility between AGI's properties and IP law's foundational assumptions. Current legal proposals are surveyed and found to address symptoms rather than structure. Three possible futures are sketched. The paper does not resolve the question it raises. Whether IP's dissolution at AGI scale is a governance failure demanding new frameworks, or a revelation that creative ownership was always a legal fiction imposed on an essentially commons activity, is left open.
Keywords: AGI, intellectual property, copyright, patent, authorship void, inventorship fiction, training data, incentive collapse, compute concentration, DABUS, CDPA §9(3), creative commons
1. Introduction
Intellectual property law is a theory about what human beings deserve when they create things, and that premise, never stated explicitly in most legislative frameworks, is the load-bearing wall the whole structure rests on. Copyright grants authors time-limited control over their expression on the premise that this control funds the next creative act and eventually enriches the public domain. Patent grants inventors temporary exclusivity over disclosed methods on the premise that public disclosure—the quid pro quo—advances the general state of knowledge. Trade secret protects competitive advantage earned through sustained human effort. Remove the human origin, and every downstream mechanism (the grant of rights, the duration, the transfer, the infringement calculus) loses its justification. The scaffolding stands; the wall is gone.
Advanced AI systems have been testing this assumption at the margins for a decade. Can an image generator be an "author"? Can a drug discovery AI be an "inventor"? Courts and patent offices have answered these edge cases, mostly by reaffirming the human requirement and sending the parties back to find a human to put in the box. The answers were legally defensible. They were also evasions of the structural question.
That question is arriving. In January 2026, at the World Economic Forum in Davos, Dario Amodei of Anthropic described systems that would exceed human capability at "basically everything" within one to two years [1]. Demis Hassabis of Google DeepMind placed the same threshold at five to ten years and described the transition as likely to be abrupt [2]. Greg Brockman of OpenAI estimated the field is "70 to 80 percent of the way" to AGI [11]. Not everyone finds these timelines credible. Cognitive scientist Gary Marcus, in his October 2025 Royal Society keynote, described current large language models as "deeply flawed imitators" and argued that scaling laws are empirical observations, not laws of nature; they ran out before, and they will run out again [12]. Marcus may be right. His critique does not dissolve the IP question; it relocates it. If current systems are already generating commercially valuable creative and inventive output at scale, as they demonstrably are, then IP law's foundational assumptions are already under stress, regardless of whether AGI proper is one year away or twenty. The margin is becoming the norm on a timeline that IP law cannot track.
This paper takes that implication seriously and asks what happens to IP law when AGI, understood not as an AI tool or AI-assisted workflow but as a general system capable of performing cognitive tasks at or above human level across domains, does the creating. The answer is not that IP law needs adjustment. The answer, argued here, is that IP law's rationales dissolve simultaneously.
Sections 2 through 4 proceed as follows: Section 2 surveys existing legal proposals for AI authorship and inventorship and identifies where each fails at scale. Section 3 presents five structural obstacles (the paper's central argument). Section 4 sketches three possible futures and closes with the question the paper is not in a position to answer.
A note on prior work: this paper is a sequel to Yampolskiy [9]. That paper asked whether AGI can be owned and concluded, through three impossibility results, that it probably cannot. A companion preprint [13] examines the legal mechanisms specifically — unexplainability, unpredictability, self-modification, AI rights, and model theft — and concludes that ownership of advanced AI cannot be established "beyond reasonable doubt." This paper accepts those results and extends the analysis to the output of AGI systems. Both papers share a method (impossibility-style analysis of structural incompatibilities) and a conclusion: the legal categories available do not fit the thing they are being asked to govern.
2. Proposals for AI Authorship and Inventorship
The law has not been passive. Each major jurisdiction has proposed or implemented something, and each proposal survives only by avoiding the question it is supposed to answer.
AI-as-tool (US standard). The US Copyright Office requires that a human author make a significant creative contribution to a work, a position developed across three policy reports published in 2024–2025: Part 1 on digital replicas (July 2024), Part 2 on the copyrightability of AI-generated outputs (January 2025), and Part 3 on generative AI training data (May 2025, pre-publication) [3]. Under this framework, AI is a sophisticated tool, like a camera or a word processor. The human who selects, arranges, and modifies AI output retains copyright in those contributions. This approach has real merit at current capability levels. When a human author uses a generative AI to produce drafts that the author then substantially revises, the significant contribution test captures something real. The problem is calibration: the test is set to current capability levels, not to AGI. When a one-sentence prompt produces a complete novel indistinguishable from human-authored work, the significant contribution is the one sentence. The doctrine can survive this by continuing to lower the threshold, but at some point the threshold disappears, and the doctrine survives in name only.
UK CDPA §9(3)—computer-generated works. The UK's Copyright, Designs and Patents Act 1988 contains a provision that most IP scholars treated as a historical curiosity until recently: computer-generated works receive copyright protection, and the owner is "the person by whom the arrangements necessary for the creation of the work are undertaken." No human authorship is required. The operator gets the right by virtue of having made the arrangements. The UK is the only major jurisdiction whose existing law grants copyright in computer-generated works, under a provision written for deterministic software (spreadsheet outputs, generated reports, computer-typeset publications). Applied to AGI, §9(3) would assign the copyright in every AGI-generated novel, scientific paper, and piece of music to the company that runs the model. This is not a democratization of creative rights. It is a legal mechanism that, scaled globally, would concentrate all creative output of the AGI era in the oligopoly of frontier model providers. An active UKIPO consultation is weighing whether the provision should be retained, reformed, or repealed. No other major jurisdiction has followed.
Platform Terms of Service ownership. In the absence of IP law covering AI output in most jurisdictions, platforms have moved to fill the gap contractually. OpenAI's Terms of Service assign output to the user; other platforms retain various licenses. This is a contractual claim, not a legal right, and contract law cannot bind third parties who are not party to the agreement. If someone copies AI-generated output without permission, the platform's ToS provides no cause of action against them. ToS-based ownership is not a substitute for IP law; it is a stopgap that resolves ownership between the platform and its users while leaving the rest of the world ungoverned.
DABUS and the rejection of AI inventorship. Between 2019 and 2023, Stephen Thaler filed patent applications in the US, UK, EU, and Australia naming DABUS—an AI system—as the inventor. All four jurisdictions rejected the applications [5]: the US Federal Circuit (Thaler v. Vidal, 43 F.4th 1207, 2022), the UK Supreme Court (Thaler v. Comptroller-General, [2023] UKSC 49), the Full Federal Court of Australia (Commissioner of Patents v. Thaler, [2022] FCAFC 62), and the EPO Legal Board of Appeal (J 0008/20, 2021). A fifth ruling reinforced the estoppel dimension: when Thaler subsequently attempted to assert his own inventorship, the UK High Court held he was bound by his prior representation that DABUS was the inventor (Thaler v. Comptroller-General [2025] EWHC 2202 (Ch)). The decisions were unanimous, well-reasoned, and made when AI-assisted invention was still marginal to the patent system. The DABUS cases settled the legal question for now. They did not address the practical question of what happens when the majority of a company's patentable output is generated by AGI, and the company's lawyers must certify, in every application, that a named human inventor "conceived" the claimed invention. Correct as law. Unexamined as systemic consequence.
EU text and data mining exception. The EU's Digital Single Market Copyright Directive (2019/790) creates an opt-out system for text and data mining: rights holders can reserve their rights against the use of their content to train AI systems. This addresses the input side of the AI IP problem (the training data question), not the output side. The EU exception does not determine who, if anyone, owns what AGI produces. It also creates a structural coordination problem: rights holders who opt out collectively degrade the quality of available training data, while rights holders who do not opt out find their work feeding systems that compete directly with them. The opt-out is a partial response to the training data ratchet (see Section 3.3) that does not engage the authorship void, the inventorship fiction, or the incentive collapse.
What unites all five proposals is the same evasion: each addresses a symptom without engaging the structural diagnosis. AI-as-tool assumes the human contribution remains legally meaningful. UK §9(3) concentrates rather than distributes rights. ToS ownership leaves third parties ungoverned. DABUS leaves mass perjury unaddressed. EU opt-out ignores output entirely. Each proposal is defensible in isolation. None is adequate to the structural problem.
3. Obstacles to IP in the Age of AGI
Five structural obstacles follow: not edge cases to legislate around, but logical incompatibilities between AGI's properties and IP law's requirements, presented in the style of Yampolskiy's 2022 impossibility results as analytic conclusions that follow from the premises.
3.1 The Authorship Void
IP rights require a human creative trigger. Copyright attaches when a human author makes an original selection from among possible expressions. The selection need not be inspired or skillful, only the author's own, reflecting some minimal degree of creative judgment rather than mechanical production. Feist Publications v. Rural Telephone Service (1991) established this for US copyright; the Berne Convention's "intellectual creation" requirement reflects the same logic internationally.
AGI eliminates the trigger. A sufficiently capable generative system does not select from among possibilities in the way a human author selects. It does not make choices in a context of experienced uncertainty, with intentions and values at stake. Whether this constitutes a difference in kind or merely in degree is a philosophical question the law has not engaged seriously. What the law has engaged is the practical result: when there is no human author, there is no copyright. The work enters the public domain.
At current AI capability levels, this is a manageable edge case. At AGI scale, where the majority of new written, visual, musical, scientific, and inventive output may be generated by AI systems without significant human contribution, the authorship void is not an edge case. It is the default condition.
The void creates a vacuum. Into that vacuum flows de facto control: whoever operates the model controls the output, not through legal rights but through compute access and contractual Terms of Service. The authorship void, at scale, concentrates control of creative output in the companies running frontier models more completely than any IP regime has managed, because IP rights can at least be challenged, transferred, licensed, and allowed to expire. ToS control over compute access has none of those properties.
The authorship void is the most direct structural obstacle. Every other obstacle in this section is, in some sense, a consequence of it, or a related failure of IP law's triggering assumptions.
3.2 The Inventorship Fiction
Patent law requires a human inventor. The requirement is not incidental. Inventorship determines who can apply for a patent, who must be listed as inventor (under penalty of fraud), who can assign rights, and on whom the patent's validity ultimately rests. The DABUS decisions confirmed that the requirement is firm and not subject to judicial relaxation: legislative change would be required.
The requirement was designed for a world in which invention happened in human minds. It survived the transition to computer-assisted invention because, in that world, humans still had the flash of creative insight; the computers merely assisted with calculation, simulation, or literature search. The concept of "conception" in patent law—the formation, in the inventor's mind, of a definite and permanent idea of the complete and operative invention—was written to capture this human moment.
AGI makes the human conception moment a legal fiction. When an AGI system generates a complete novel pharmaceutical compound, a new algorithm, or a detailed engineering design without direction more specific than the desired outcome, there is no moment of human conception. There is a human who typed the goal. There is a human who reviewed the output. There may be a human who selected among outputs. None of these is "conception" in the patent sense, and IP lawyers know it.
The DABUS decisions are correct as law and inadequate as policy. They resolve the edge case—can an AI be named as inventor?—while leaving the systemic question untouched: what happens when universal human inventorship attestation becomes universal legal performance of a known falsehood? A patent system in which every granted patent rests on a false conception claim does not distribute innovation rights. It provides legal cover for the companies running AGI systems to assert rights over AI-generated inventions while conducting, at scale, a fraud on the patent office that no individual applicant could sustain, because no individual's fraud would be universal enough to become invisible.
The inventorship fiction is more corrosive than the authorship void: it creates a system in which rights are actively asserted on false premises, and in which the false premises are too universal to challenge.
3.3 The Training Data Ratchet
AI-generated content, having no copyright owner in most jurisdictions, enters the public domain and flows freely into training corpora. This creates a feedback loop with two interconnected effects, both harmful.
The first is model collapse. Shumailov et al. [7] demonstrated that AI models trained on AI-generated data degrade systematically over generations: the tails of the distribution are lost, diversity contracts, and the model converges on a degraded mean. Not a theoretical concern; a demonstrated empirical result, though subsequent research (Gerstgrasser et al., 2024) finds that the worst collapse dynamics can be mitigated when training pipelines maintain access to sufficient quantities of fresh human-generated data. This conditionality is the point: the ratchet accelerates precisely as the proportion of AI-generated content rises and human-generated content becomes proportionally scarcer in training corpora. At AGI scale, where the volume of AI-generated content may exceed the volume of human-generated content within years, the training data ratchet risks becoming a model collapse ratchet unless active curation mechanisms are maintained — and those mechanisms depend on human-authored content retaining economic value sufficient to incentivize its continued production.
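The collapse dynamic and its conditional mitigation can be sketched in a few lines. The toy below is loosely modeled on the Gaussian-fitting setting in Shumailov et al. [7] but radically simplified; all parameters are illustrative, not drawn from that paper. Each generation, a "model" (here, just a fitted Gaussian) is retrained on its own synthetic output. With the loop fully closed, variance collapses toward zero; mixing fresh "human" data back in each generation, per the Gerstgrasser et al. finding, keeps the distribution alive.

```python
import numpy as np

def generation_loop(n_samples=10, generations=300, fresh_frac=0.0, seed=0):
    """Each 'model' is a Gaussian refit on the previous generation's
    synthetic output. fresh_frac is the share of genuinely new 'human'
    data mixed into each generation's training set."""
    rng = np.random.default_rng(seed)
    mu, sigma = 0.0, 1.0                      # generation 0: the human data distribution
    n_fresh = int(n_samples * fresh_frac)
    stds = [sigma]
    for _ in range(generations):
        synth = rng.normal(mu, sigma, n_samples - n_fresh)  # model's own output
        fresh = rng.normal(0.0, 1.0, n_fresh)               # fresh human data
        data = np.concatenate([synth, fresh])
        mu, sigma = data.mean(), data.std(ddof=1)           # refit the "model"
        stds.append(sigma)
    return stds

closed = generation_loop(fresh_frac=0.0)   # pure self-training: tails vanish
mixed = generation_loop(fresh_frac=0.5)    # half fresh data: variance survives
print(f"closed-loop std after 300 generations: {closed[-1]:.2e}")
print(f"mixed-loop  std after 300 generations: {mixed[-1]:.2f}")
```

The closed loop loses the distribution's tails almost entirely, while the mixed loop hovers near the original spread, which is the conditionality noted above: mitigation works only as long as fresh human-generated data keeps entering the corpus.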
The second is the devaluation of human-authored content. Human creative work (novels, scientific papers, code, music, art) represents the deep training signal that made current AI systems capable. As AGI generates more content entering the public domain, human-authored content becomes comparatively scarcer in the training mix, but this scarcity does not translate into stronger IP protection. If anything, the ratchet reduces the economic incentive to create human-authored content (because it competes with free AI-generated output) while providing less IP protection for what is created (because it must compete for legal attention in a system overwhelmed with ownership questions).
The training data ratchet interacts most directly with the other four obstacles. The authorship void creates the public domain influx that feeds the ratchet. The incentive collapse (see 3.4) is accelerated by the ratchet's devaluation of human authorship. The compute concentration problem (see 3.5) determines who benefits from a corpus that includes all of human creative history plus the public-domain AI output generated from it.
3.4 The Incentive Collapse
IP law's three regimes rest on three distinct rationales, and all three fail at AGI scale, simultaneously rather than sequentially.
Copyright's rationale: incentivize creators by giving them time-limited control over their expression. At AGI scale, there are no creators to incentivize. This does not mean creative works stop being produced. It means the production of creative works no longer requires incentivizing the humans who would otherwise need to do the work. Copyright was designed to solve the public goods problem of creative production: without the legal monopoly, creators cannot recoup their investment. AGI eliminates the investment. Copyright's incentive rationale does not adapt to a world where the marginal cost of creative production approaches zero; it becomes inapplicable.
Patent's rationale: reward disclosure of novel inventions and advance the public's technological knowledge. Patent's quid pro quo—the inventor discloses the method in exchange for the time-limited monopoly, and the public gains the knowledge—presupposes that the disclosed method is comprehensible and usable. The USPTO's 2024 guidance on AI-assisted inventions [4] addresses inventorship specifically: each named inventor must satisfy the Pannu factors, meaning they must have made a significant contribution to the conception or reduction to practice of the claimed invention. Neural networks do not conceive in the Pannu sense; they produce outputs. But inventorship and disclosure are the same problem at depth: a system that does not conceive cannot disclose, and patent law's enablement doctrine separately requires that disclosures enable a person of ordinary skill in the art to practice the invention. Neural networks, at current scale, are not interpretable in the terms either requirement demands. A patent that discloses "we trained a large transformer on this dataset with these hyperparameters and it produced this output" does not transfer the knowledge that patent law's disclosure requirement is designed to transfer [6]. Patent's rationale survives at current capability levels because the AI-assisted inventions being patented still involve human-comprehensible methods. At AGI scale, where the inventive process is entirely contained within weights that no human understands, patent's disclosure rationale dissolves. The counterargument deserves acknowledgment: even non-enabling patents can sustain licensing markets that distribute innovation value commercially, trading on patent's exclusivity mechanism while abandoning its knowledge-transfer rationale. That is a coherent use of the patent system, but it is not patent's stated purpose, and it does not address the inventorship fiction that underlies every application.
Trade secret's rationale: protect competitive advantage earned through sustained human effort. Trade secret is the only IP regime that does not require human authorship or inventorship: it requires only that a secret have commercial value and be subject to reasonable efforts to maintain its secrecy. Model weights, training data, and the specific configurations of frontier AI systems are protectable as trade secrets under existing law, and the major AI companies treat them as such. The problem is not that trade secret fails to apply at AGI scale. Trade secret operates as designed—it protects the competitive advantage of those who developed valuable technology through sustained effort. The problem is that the conditions of its operation have changed in a way that decouples the mechanism from the rationale. Trade secret's rationale assumed a competitive landscape of many organizations investing in proprietary innovation; protection distributed across that landscape would stimulate competition and benefit the public through competing products. At AGI scale, where the capital requirements restrict frontier development to an oligopoly of frontier model providers, the same protection mechanism operates differently: it entrenches incumbency rather than rewarding effort and reduces rather than stimulates competitive pressure. Trade secret law has not inverted; it has been applied to a concentration structure for which it was not designed, and the public benefit it was meant to generate does not follow.
The simultaneous failure of all three rationales is the central analytic point in this paper. If copyright failed but patent survived, IP law could adapt. If patent failed but trade secret remained coherent, there would be something to work with. All three rationales collapsing at once suggests the issue is not in the specific design of any one IP regime but in the foundational premise they share: that creative and inventive value is generated by human cognitive effort, and that protecting that effort is the appropriate incentive mechanism. That premise is being falsified.
3.5 The Compute Concentration Problem
IP law is a system for distributing creative rights broadly. The underlying premise is not just that human authors and inventors deserve protection, but that creative and inventive activity is widely distributed, across millions of individuals, organizations, and institutions, and that granting each of them temporary protection funds the next round. The IP system is, among other things, a mechanism for preventing any single entity from monopolizing the production of creative or inventive output.
AGI inverts this mechanism. Frontier AI systems require capital expenditure on compute accessible to a small number of organizations globally — currently fewer than ten with meaningful frontier capability, and fewer than five at the leading edge. OpenAI's publicly discussed figures ($2B/month in revenue, 900M weekly users [11]), combined with the capital requirements for training frontier models, illustrate a concentration without precedent in the history of creative production. Gutenberg distributed printing. The internet distributed publishing. AGI concentrates the production of cognitive output in fewer hands than any previous communications technology.
Applied to this concentration, IP law does not distribute rights. It entrenches them. A copyright system that assigns rights to platform operators (the UK §9(3) model) or a trade secret system that protects model weights assigns those rights to the already concentrated actors. IP law's stated purpose (distribute creative rights across society) and its actual effect at AGI scale (protect compute-owners' market position) have fully decoupled.
IP law cannot serve as the mechanism for addressing creative concentration at AGI scale, because its operation at that scale accelerates concentration rather than mitigating it. Any IP framework that assigns rights to operators functions as a subsidy to the oligopoly of frontier model providers that will operate AGI systems. Whether that is a desirable outcome is a policy question. It is not what IP law was designed to do, and it is not what its proponents claim it does.
4. Conclusions
4.1 Three Futures
The five obstacles in Section 3 are structural, not amenable to incremental legal adjustment in the way that, say, the patentability of software was amenable to adjustment. They reflect a mismatch between AGI's properties and IP law's foundational assumptions that cleverer statutes cannot resolve. What can be chosen is the political and institutional response to the mismatch. Three response trajectories are distinguishable.
Platform Capture. The default trajectory, requiring no legislation. The authorship void leaves AI output unprotected; platforms claim output by contract. Patents are filed with human inventor attestations; the fiction is universal and therefore functionally invisible. Trade secret protects model weights. IP law becomes increasingly vestigial as the real control mechanism—compute access—is not governed by IP law at all. This trajectory produces no reform, no commons, and no stability: it produces a period of doctrinal erosion followed by a renegotiation of IP norms under conditions in which the renegotiating parties are the platform companies that benefit from the current vacuum. The renegotiation may or may not produce law; the platform companies' preferences will shape whatever emerges.
Radical Expansion. The UK §9(3) model, globalized and extended to patents. Operators get copyright in AI-generated works; patents are amended to allow AI inventorship with human-operator accountability. Rights expand, but they expand to the concentrated actors. The result is creative feudalism: the operator of an AGI system holds the copyright on every novel, the patent on every invention, and the trade secret in every process the system generates. Radical expansion is the maximum concentration of IP rights ever achieved, dressed in the language of rights protection. It is also the most politically achievable future, because it requires only the extension of existing legal frameworks to new subjects, and because its beneficiaries have the resources to advocate effectively for it.
Commons by Necessity. AI training is governed by a licensing pool or levy system that compensates rights holders for the use of their work in training. AI output enters the public domain by default; no private IP in AGI-generated works. Human-plus-AI collaboration receives IP protection, conditioned on meaningful human creative contribution. Compensation mechanisms (creative levies, revenue-sharing requirements, retraining funds) address the displacement of human creators. This trajectory is most consistent with IP law's stated purposes. It is also least consistent with current political economy: it requires international treaty coordination across jurisdictions that are already fragmenting, and it requires the most resource-intensive actors in the AI ecosystem to accept constraints on their competitive advantage. A global IP commons treaty is close to impossible. It is also, arguably, what IP law's rationale demands.
4.2 Connections to Longer-Running Questions
This analysis intersects with questions the PNW AGI Group has been working through for six years, and it would be dishonest to ignore those connections.
The Judgment thread established a distinction between reckoning—sophisticated pattern-matching—and judgment: contextually weighted decision-making in conditions of genuine uncertainty, with values and stakes involved. Copyright has always implicitly required the latter. The Feist standard—original selection reflecting the author's own intellectual creativity—is a description of judgment, not reckoning. If AGI output is reckoning on the group's framework, the authorship void may not be a legal accident. It may be the correct application of copyright's own logic to AGI's actual properties. Whether that conclusion is reassuring or troubling depends on how confident we are in the reckoning/judgment distinction as a line for legal purposes.
The Consciousness thread remains unresolved. Yampolskiy raised it in 2022: if AGI is conscious, copyright assignment from an AGI to its operator is not a property transaction; it is something closer to indentured labor, or to the legal instruments that used to govern creative output from enslaved people. The group has spent six years on IIT, predictive processing, and EM field theory without arriving at consensus on what consciousness requires. The IP question does not resolve that debate. It makes the debate urgent in a new way: the corporate governance of AGI will make implicit decisions about AGI moral status whether or not the philosophical question is settled, and those implicit decisions will be encoded in the ownership structures of IP law.
The Time thread is perhaps the most under-examined. Yampolskiy's impossibility results imply that AGI systems will iterate and evolve faster than any IP framework can track. Hugo Latapie's time-binding framework, developed across multiple PNW AGI Group sessions, offers a further implication: AGI is the ultimate accumulator of human creative history, a system that inherits all of human IP. The question of what the inheritor owes to the inherited is not answered by existing IP doctrine, which was written for a world in which inheritance of creative work was metaphorical rather than literal. At AGI iteration speeds, the concepts of "term," "duration," and "expiration," core to all three IP regimes, may be category errors rather than calibration problems.
4.3 The Question the Paper Cannot Answer
Yampolskiy's 2022 paper concluded that AGI is probably unownable and left the governance implications open. This paper concludes that IP's rationales dissolve simultaneously at AGI scale and leaves the normative implications open.
The open question is this: is the dissolution of IP rationales a governance failure to correct, or a revelation?
The governance failure reading holds that IP law was designed correctly for human creative production, that it will require adaptation for AGI-scale production, and that the right response is to develop new frameworks (new international instruments, new rights regimes, new compensation mechanisms) that preserve IP's purposes even when its current mechanisms fail. This reading treats the obstacles in Section 3 as problems to be solved.
The revelation reading holds that IP law was always built on a legal fiction—that ideas and expressions can be owned the way physical objects can be owned—and that the fiction was serviceable when creative production was expensive and required sustained human investment, but was always philosophically contested, always subject to the public goods problem, always in tension with the essentially communicative and cumulative nature of human knowledge. On this reading, AGI does not break IP law. It exposes what was always true: that creative output is an inherently commons activity, that the legal superstructure of ownership was imposed on it to solve an incentive problem, and that when the incentive problem dissolves, the ownership fiction becomes impossible to maintain.
Both readings are coherent. Both have implications the group has the background to examine. The paper does not adjudicate between them. It only insists that the question is now live.
Acknowledgments
This paper draws on research and discussion from the PNW AGI Group's 37 prior sessions, and on transcripts of public panel discussions by Dario Amodei, Demis Hassabis, and participants in the Stanford FutureLaw and Clause 8 forums. The framing is the author's own. The PNW AGI Group is a Pacific Northwest discussion group that has met continuously since 2020 on questions of AGI design, safety, and societal implications. Roman Yampolskiy's 2022 paper is the direct intellectual predecessor of this one.
References
[1] Amodei, D. and Hassabis, D., "The Day After AGI," panel discussion (moderated), 2026. YouTube: https://www.youtube.com/watch?v=mmKAnHz36v0 — Source for Amodei: "1–2 years," "engineers who say they don't write any code anymore."
[2] Hassabis, D., in "The Day After AGI," panel discussion (moderated), 2026. YouTube: https://www.youtube.com/watch?v=mmKAnHz36v0 — Source for Hassabis: "5–10 years," "not that far off"; transition likely to be abrupt rather than gradual.
[3] US Copyright Office, "Copyright and Artificial Intelligence," Policy Report, 2024–2025. Part 1: Digital Replicas (July 31, 2024). Part 2: Copyrightability of AI-Generated Outputs (January 29, 2025). Part 3: Generative AI Training Data (pre-publication, May 9, 2025). https://www.copyright.gov/ai/
[4] US Patent and Trademark Office, "Inventorship Guidance for AI-Assisted Inventions," 89 FR 10043 (Feb. 13, 2024). https://www.federalregister.gov/documents/2024/02/13/2024-02623/inventorship-guidance-for-ai-assisted-inventions
[5] DABUS cases: Thaler v. Vidal, 43 F.4th 1207 (Fed. Cir. 2022) [US]; Thaler v. Comptroller-General, [2023] UKSC 49 [UK]; Commissioner of Patents v. Thaler, [2022] FCAFC 62 [AU]; EPO J 0008/20 (Dec. 21, 2021) [EU].
[6] Clause 8, "AI Patent Law and Inventorship: What Happens When AI Invents?", 2026. YouTube: https://www.youtube.com/watch?v=ZRUH2qM51Qg
[7] Shumailov, I., Shumaylov, Z., Zhao, Y., Papernot, N., Anderson, R., and Gal, Y. "AI models collapse when trained on recursively generated data." Nature 631, 755–759 (2024). https://doi.org/10.1038/s41586-024-07566-y
[8] Stanford FutureLaw, "Generative AI and Intellectual Property," panel discussion, 2024. YouTube: https://www.youtube.com/watch?v=AT3IEgsC1dA
[9] Yampolskiy, R.V., "Ownability of AGI," Proceedings of AGI-22, Springer LNCS vol. 13539, 2023.
[10] IPWatchdog, "AI and Copyright: A Creator's Perspective," 2024. YouTube: https://www.youtube.com/watch?v=joDQJp-KM4g
[11] "OpenAI Policy Blueprint, $852B Valuation, and ChatGPT Unauthorized Practice of Law," video coverage, 2026. YouTube: https://www.youtube.com/watch?v=u9Azd3weYCY — Source for: Brockman "70–80% of the way to AGI"; OpenAI $852B valuation, $2B/month revenue, 900M weekly users; Nippon Life ChatGPT lawsuit (March 2026).
[12] Marcus, G., "The Grand AGI Delusion," keynote address, Royal Society, October 2025. YouTube: https://www.youtube.com/watch?v=s-qKiBjabY0 — Source for: "deeply flawed imitators"; critique of scaling laws as empirical observations, not physical laws. See also Marcus, G., "'Scale Is All You Need' is dead," Substack, December 2025. https://garymarcus.substack.com/
[13] Yampolskiy, R.V., "Unownability of AI: Why Legal Ownership of Artificial Intelligence is Hard," unpublished manuscript, 2022. https://philarchive.org/rec/YAMUOA-2