Summary of the 24th Online Conference of the Qiyou Forum
Release time: 2024-12-16


The 24th conference of the Qiyou Forum was successfully conducted online on February 1, 2024. This session's theme was "Legal Challenges and Responses in the Era of Generative Artificial Intelligence." The event drew participation from over 60 academics and students across various universities.

The Qiyou Forum was established by the Cyberspace Governance Research Center of Shanghai Jiao Tong University.

This particular session was organized by the Law School of Beijing Normal University.

Two esteemed scholars delivered keynote addresses during the opening ceremony: Professor Shou Bu, Director of the Cyberspace Governance Research Center at Shanghai Jiao Tong University and Director of the Artificial Intelligence Governance Research Center at the University of Science and Technology of China, and Professor Xue Hong of the Law School of Beijing Normal University. The proceedings were moderated by Zhang Weichen, a doctoral candidate at the Law School of Beijing Normal University.



Shou Bu

Professor Shou Bu first expressed his gratitude to Professor Xue Hong and her team for their hard work in organizing this session, as well as to all the speakers and teachers for their strong support. He then introduced the origin of the Qiyou Forum, explaining the meaning behind the name "Qiyou" and the purpose it expresses: high expectations for the bright futures of the forum's postdoctoral, doctoral, and master's students. Finally, Professor Shou Bu shared his understanding of research methods for AI legal issues from four aspects: researchers of AI legal issues should know as much as possible about, first, AI science and technology; second, AI philosophy; third, AI ethics; and fourth, foreign AI legal literature. He stressed the need to overcome "three barriers": the "foreign language barrier," the "technical barrier," and the "Chinese barrier." Professor Shou Bu proposed that legal research on artificial intelligence should follow the path of "technology → philosophy → ethics → law." He also expressed his hope that participants would learn from one another through the forum, deepen their friendships, and work together to further promote academic research.



Xue Hong

In her opening address, Professor Xue Hong extended a warm welcome and deepest gratitude to all the forum's attendees. She expressed her appreciation to Professor Shou Bu for his confidence and also recognized the efforts of all the students involved in organizing this conference. Touching upon the legal dimensions of artificial intelligence, Professor Xue Hong elaborated on her insights from three perspectives. Firstly, she highlighted the swift advancement of artificial intelligence technology and the transformative impact that general artificial intelligence is poised to have on knowledge production, management, and dissemination, expressing her hope that young scholars would sustain their enthusiasm for exploring innovative, cutting-edge topics and thereby tap into their research potential. Secondly, she advocated for preserving the adversarial, debate-driven nature of knowledge through academic exchange, with the aim of building a shared future for the academic community. Thirdly, she emphasized the need for research in intellectual property law, internet law, and technology law to effectively embrace internationalization and globalization, calling for continuous dialogue, exchange, and collaboration between domestic and international scholars to yield fruitful outcomes.

The session's moderator, Zhang Weichen, thanked the two distinguished speakers for their heartfelt presentations and valuable guidance, and urged everyone present to engage actively in discussion and communication, thereby igniting a spark of ideas.

The keynote speech of this forum was hosted by Yang Lingli, a doctoral candidate from the Law School of Beijing Normal University.

Yang Lingli expressed her gratitude to the two professors for their strong support, encouragement, and trust. She also thanked them for providing everyone with this valuable opportunity to exchange ideas on the forum. She wished the forum a complete success and that everyone would gain something from it.



Wu Jia

Wu Jia, a doctoral candidate at the Law School of Beijing Normal University, presented her report under the title, "From Data-Based to Behavior-Based: The Choice of Generative Artificial Intelligence Authenticity Assurance Model."

Wu Jia commenced her presentation by providing a succinct overview of the global legislative landscape, illustrating the varying degrees of legal regulation surrounding generative artificial intelligence across different world regions. She focused particularly on a comparative analysis of the legislative frameworks in the United States and the European Union. Her discourse was structured into four key sections.

In the first section, Wu Jia delved into the model illusions and potential risks associated with generative artificial intelligence. She began by elucidating the technical principles underpinning generative AI, with a keen focus on the analysis of large-scale language modeling techniques. She then delineated two categories of model hallucinations: closed-domain and open-domain. Concluding this section, she assessed technological risks at the individual, national, and societal levels, providing compelling evidence through case studies of misinformation dissemination.

The second section addressed the regulatory model centered on data authenticity. Wu Jia introduced the regulatory logic, characteristics, and the challenges and risks identified in the "Management Measures for Generative Artificial Intelligence Services (Draft for Comments)" released in April 2023. To ensure data authenticity, the draft adopts a pre-regulatory approach, mandating the authenticity of training data on the input side, the authenticity of content on the output side, and model optimization training on the technical side. The latent problems and risks discussed included the tension between authenticity and the "creativity" of generative technology, the technical hurdles in monitoring and enforcing data authenticity principles, and the high compliance costs that could stifle industrial innovation.

In the third section, Wu Jia explored the regulatory model centered on behavioral responsibility, using the "Interim Measures for the Management of Generative Artificial Intelligence Services" released in July 2023 as a reference. She highlighted the superiority of the behaviorally focused regulatory model in terms of regulatory logic, characteristics, and advantages. This legal norm's regulatory logic is instrumental, reducing the emphasis on data authenticity in favor of strengthened oversight of user misconduct and enhanced obligations for risk disclosure and legal guidance. The strength of this model lies in its clear delineation of responsibilities between individual users and service provider platforms, isolating the accountability for the "generation" and "dissemination" of false and harmful information. By shifting the focus to user behavior rather than the generative technology itself, it mitigates the joint liability of supporters and service providers of generative AI technology.

The final section of Wu Jia's presentation advocated for an inclusive regulatory approach that balances development with security. She noted that the "Interim Measures" propose the principle of "balancing development and security, promoting innovation while integrating legal governance." The regulatory shift from data-based to behavior-based approaches emphasizes high-risk application behaviors over the generative model itself, thus appropriately reducing the technical compliance burden on technology supporters and service providers. For the inherent risks of generative technology, a measured and cautious stance should be adopted in the industry's infancy, allowing for certain exemptions to foster technological innovation and advancement. To address the reliability issues and model biases resulting from illusions, strategies such as enhancing data quality, establishing data review and verification processes, optimizing model design and training, and incorporating user feedback can be employed to improve the accuracy and reliability of generated content. While achieving absolute authenticity remains an ongoing R&D endeavor, the current regulatory model centered on behavioral responsibility is seen as conducive to striking a balance between development and security in information governance, progressively integrating governance insights into practice and steering the generative artificial intelligence toward a path of robust and healthy development.



Bao Yiming

Bao Yiming, a doctoral candidate at the School of Law, Xi'an Jiaotong University, presented a comprehensive analysis on the subject of "Technological Security Risks and Prevention of Generative Artificial Intelligence," which was structured around four key dimensions.

Firstly, Bao Yiming addressed the rise and implications of generative artificial intelligence. She highlighted the technology's swift advancement, broad user base, and the potent algorithms and computational power that drive it. As a pivotal force in technological transformation and industrial empowerment, generative AI holds vast potential due to its exceptional natural language processing capabilities.

Secondly, the presentation delved into the technological security risks posed by generative AI. Bao Yiming categorized these risks into five distinct areas: national security concerns like digital dominance and technological resource monopolies; algorithmic security issues such as black box algorithms and discrimination; the ethical risk of diminished autonomous decision-making in human-computer interactions; the risk of data and intellectual property infringement during cross-border data flows; and other miscellaneous risks. She further examined the societal impact of generative AI, focusing on the evolving human-machine and interpersonal dynamics. The shift from a simple "otherness" relationship to a "human-machine coupling" relationship was discussed, as was the transition of generative AI from a tool to a more complex, humanoid entity. The potential for over-reliance on generative AI for information and the challenges in discerning information quality were also explored, leading to an unequal "interaction" in human-computer relations. Additionally, the ethical risks of generative AI were considered, including the potential decline in human autonomous decision-making and the technology's inherent lack of ethical judgment and emotional expression.

She also tackled the challenge of algorithmic bias in deep learning technology, emphasizing the difficulty of overcoming issues such as algorithmic black boxes and discrimination.

Thirdly, Bao Yiming addressed the sources of the technological security risks associated with generative AI. She classified the risks into four categories: personal information, content security, model security, and intellectual property, detailing the causes of each at the various stages of the AI development process, including unsupervised pre-training, supervised fine-tuning, reinforcement learning, and the generation stage.

Finally, the presentation outlined prevention measures for the technological security risks of generative AI. Bao Yiming proposed the establishment of a multi-agent collaborative governance system, recognizing the need for an agile governance mechanism to address uncertainties in technological governance. She advocated for government agencies to actively engage in agile governance, regulate lawfully, and conduct regular risk assessments and safety inspections. The technology industry was called upon to uphold compliance in technology governance and risk management. Social organizations, such as industry associations and research institutes, were encouraged to promote soft law collaborative governance. Social users were also encouraged to participate in governance through public opinion oversight and by seeking clarifications on technology applications.

In conclusion, Bao Yiming offered her insights on the collaboration between artists and AI in the creative process, the nature of AI-generated works, the deconstruction of the "thought expression" dichotomy, and a reevaluation of the concept of substantive similarity from a perspective that balances interests.



Gu Lingyun

Gu Lingyun, a doctoral candidate at Peking University Law School, delivered a thought-provoking report entitled "Reflections on the Concepts of 'Author's Death' and 'Disillusionment of Works' – Protection Logic of Artificial Intelligence Generated Products Based on Industrial Policies," which was organized around four key themes.

Firstly, Gu Lingyun addressed the varying attitudes toward complex cases. The report commenced with an analysis of multiple instances in which the U.S. Copyright Office declined to register AI-generated content (AIGC) as works, shedding light on the American stance on the protection of AI products. Similarly, the current Chinese perspective was illuminated through the examination of cases such as the Shenzhen Nanshan Dreamwriter case, the Beihu Feilin case, and the "Pictures Generated by AI from Text" copyright dispute. Additionally, the report outlined the arguments against granting intellectual property rights to AI-generated products, exploring the perspectives of "Is the author dead?" and "Is the work disillusioned?" The former includes viewpoints such as "AI is not human and cannot be an author," "the author exerts only weak control over AI-generated content," and "AI-generated products may be controlled but lack an author." The latter addresses issues such as failing to meet creativity criteria, the unpredictability of AI-generated content, and the notion that without an author, the content cannot be considered a work.

Secondly, Gu Lingyun reevaluated the concept of "the author's death," suggesting a return to the realm of creative possibilities. Drawing on the work of Roland Barthes, a master of structuralism, the report delved into the two dimensions of "the author's death": one that challenges the author's authority and traditional views on authorship, and another that questions the central and ultimate meaning of the work. The report highlighted the significance of Barthes' theory, both in freeing the text from the author's singular meaning and in the potential extremity of completely denying the text's stability and the commonality of readers' responses. Gu Lingyun reflected on the historical context and philosophical influences behind the phrase "the author is dead," emphasizing the importance of textual interpretation and the enduring value of works with an authorial identity in the era of artificial intelligence.

Thirdly, the report reframed the concept of "disillusionment of works," examining the narrative logic through the lens of industrial policy incentives. Gu Lingyun presented current viewpoints on AI-generated content, such as the idea that without human creativity, AI-generated content does not constitute a work. The report then argued that the judgment of a work ultimately hinges on the subjective concept of originality, which cannot be rigidly defined by legislation. The analysis included discussions on predictability, control, and the "thought-style-expression" binary to ternary model. Gu Lingyun offered insights on these existing methods, considering issues like the mapping between natural language and machine language, the role of prompt words, and the evolving nature of style in copyright law.

Fourthly, the report clarified a range of issues related to the protection of AI-generated products under industrial policies. Gu Lingyun compared the U.S. "Executive Order on Artificial Intelligence" with China's "Global Artificial Intelligence Governance Initiative" across six dimensions: value orientation, governance goals, model security, personal information, algorithmic fairness, and international cooperation. The report discussed the intermittent failure of motivation theory and argued for incentivizing works that contribute to market dynamics and maintain competitive advantage. Gu Lingyun analyzed the implications of not empowering versus empowering AI-generated products, advocating for a balanced approach that considers both the industrialization of AI-generated content and the maintenance of a stable market transaction order.

In conclusion, Gu Lingyun's report provided a comprehensive and nuanced exploration of the challenges and opportunities presented by AI-generated content within the framework of intellectual property law and industrial policy.

The lively discussion and interactive exchange session was moderated by Zhang Weichen, a doctoral candidate at the Law School of Beijing Normal University. Faculty members and a cohort of doctoral students engaged in a thorough evaluation and robust discussion during the event.



Sun Shan

Associate Professor Sun Shan from the School of Civil and Commercial Law at Southwest University of Political Science and Law began by expressing his gratitude for the invitation to speak at the forum. He then delved into several key perspectives on the legal implications and scholarly research surrounding generative artificial intelligence. His insights were as follows:

Firstly, he emphasized the need for precision in the use of terminology, questioning whether objects created by artificial intelligence are synonymous with those generated by generative AI, and whether they should be classified as objects or content. He noted that the distinction between broad-scope and narrow-scope AI is not a matter of technological advancement but rather reflects different areas of application. Professor Sun urged the audience to broaden their knowledge in technology, linguistics, and other fields.

Secondly, he highlighted the distinct underlying logic of AI-generated content compared to human creativity, prompting a consideration of whether such content should be excluded from copyright protection or afforded a unique form of safeguard.

Thirdly, he queried the nature of AI's creative output, asking whether it encompasses high-quality intellectual crystallization and whether copyright law should protect intellectual achievements or the abstract human essence.

Fourthly, he proposed the design of discriminative experiments to assess the reasonableness of the selection space's size and the randomness of the generated content, questioning whether judgments of independence should be limited to surface-level formalities or delve into the underlying ideas.

Fifthly, he pondered the naturalness and reasonableness of adhering to the theoretical assumptions of the author rights system, and whether a pragmatic approach to problem-solving necessarily leads to suboptimal outcomes.

In conclusion, Associate Professor Sun Shan summarized that in the realm of protecting AI-generated content, the challenge of legal drafting is inevitable. He advocated for a rational approach to legal drafting, affirming its legitimacy while focusing on the essence of generative AI technology and its corresponding industrial impacts. He stressed the importance of aligning with industrial logic and regulating industrial behavior accordingly.



Zuo Ziyu

Zuo Ziyu, a lecturer at the Law School of Sichuan Normal University, first expressed her gratitude for the invitation to participate in the discussion session and affirmed the presentations of the preceding speakers. She then shared her views on the legal issues of generative artificial intelligence. First, on whether AI-generated products should be protected by copyright law: originality was initially an aesthetic concept, but upon entering the field of law it acquired legal normative significance, and will theory is the key to addressing this issue; in the relevant legal scholarship, neither anthropomorphic voluntarism nor limited-personality theory has departed from the scope of voluntarism. Second, schemes that determine the status of works on consequentialist grounds should be approached with caution: recognizing consequentialism represents a significant transformation in the overall functioning of society and conveys a value orientation, so defining creativity solely by form and outcome can easily lead to an extreme pursuit of interests and unfairness toward individuals. Third, it is feasible to address the issues of generative artificial intelligence from the perspectives of industrial policy, the balancing of interests, and legal pragmatism.



Jin Yulu

Dr. Jin Yulu, a postdoctoral fellow at Tsinghua University Law School, assessed the contributions of the three presenters in turn. She contended that the topics selected by the doctoral candidates held considerable practical relevance and exhibited an innovative ethos, and that with further refinement and polishing, their work has the potential to evolve into significant academic contributions. Dr. Jin suggested that when examining the technological security risks associated with generative artificial intelligence, greater emphasis could be placed on certain arguments. She asserted that exploring legal issues through the lens of generative AI demonstrates profound theoretical insight, and she advocated for more comprehensive and nuanced research in this field moving forward.



Huang Li

Huang Li, a doctoral candidate at the School of Law, Shanghai Jiao Tong University, commenced by tracing the origins and evolution of the Qiyou Forum, expressing her optimism for its continued growth and refinement. She extended her gratitude to the conference organizers and encouraged more young academics to participate in future dialogues and scholarly exchanges. Additionally, Huang Li underscored the necessity for more intensive research into the practical aspects of generative artificial intelligence technology to overcome the impasse that restricts discussions on legal standards. In her closing remarks, she shared insights from her investigation into the training data phase of generative AI. Huang presented a series of domestic and international copyright infringement cases involving generative AI data training, underscoring the immediacy and relevance of ongoing research. She provided a succinct analysis of the current Chinese legal standards for data sourcing as outlined in the "Interim Measures for the Management of Generative Artificial Intelligence Services," and identified a range of challenges confronting China, including the legitimacy of data provenance, the presence of illegal components, commercial utilization, and the high cost of intellectual property compliance reviews. Huang also offered a brief overview of the regulatory frameworks concerning the legality of data training in the United States, the European Union, and Japan.



Li Anyang

Li Anyang, a doctoral candidate at Shanghai Jiao Tong University Law School, commenced by assessing the presentations of the three speakers. Addressing the research of Wu Jia on "machine illusion," Li Anyang contended that machine hallucinations are not solely a matter of data quality; they are also significantly influenced by data quantity. To ensure data authenticity and promote fairness and diversity in data sourcing, as well as the rationality of data sampling, careful consideration must be given. Li Anyang noted that the data processing phase—comprising tasks like cleaning and labeling—is intricate and involves numerous stakeholders, including outsourced labor, where the hands-on data work is often carried out by frontline employees. This results in an extended chain of responsibility for the operational process, making the attribution of responsibility particularly complex. Turning to Bao Yiming's research on "agile governance," Li Anyang observed that this concept is an import from management studies. The challenge, Li Anyang suggested, lies in how to conduct thorough research and craft nuanced regulations within the legal domain, which necessitates flexibility but also introduces an element of uncertainty. Regarding Gu Lingyun's research, Li Anyang concurred that the uncertainty of back-end processes should not preclude the investigation of front-end issues. The crux of researching the protection of AI-generated products, Li Anyang emphasized, is the significance of texts with authorial identity. Li Anyang then posited that the concept of testing for human control over AI is scientifically flawed, as it is subject to a variety of influences, including technological equipment and instructional nuances, leading to inherently unstable outcomes. 
Therefore, Li Anyang argued, it is imperative to delve into the technical underpinnings of generative AI, such as deep learning, to explore the nature of technologies like deep learning and to examine the existence of subjective qualification and free will. In conclusion, Li Anyang highlighted that current legal research on generative AI technology should aim to establish specialized procedural regulations based on substantive research into subject qualification, thereby delineating the rights and obligations of the relevant parties involved.

During the Q&A session, Li Yan asked: "How should we view the impact of AI-generated content on human cognition and creativity?" Lecturer Zuo Ziyu of the Law School of Sichuan Normal University answered from the perspectives of creativity and cognition. At the creative level, the impact of AI-generated content on humans can be explained through the relationship between the author and the work, as well as the recognition of originality; the concept of a work incorporates many aesthetic elements as constituent parts, which itself challenges the concept of creation, such as whether creation recognizes only the result or extends to the process. At the cognitive level, generative artificial intelligence can give rise to "algorithmic black boxes" and "information cocoons," affecting people's access to knowledge, narrowing their thinking and insight, and ultimately undermining their capacity for innovation.



Hao Mingying

The concluding summary session was moderated by Zhang Weichen, a doctoral candidate at the Law School of Beijing Normal University. Hao Mingying, a lecturer at the School of Civil, Commercial and Economic Law of China University of Political Science and Law, delivered the closing address. Hao Mingying began by inspiring young scholars to strive for "unparalleled" excellence and commended the presentations of all participants, asserting that research in the field of generative artificial intelligence law intersects with issues in science and technology, philosophy, ethics, economics, and political science, necessitating the development of interdisciplinary thinking frameworks. Furthermore, she emphasized the need for an industrial perspective when addressing legal issues arising from artificial intelligence, ensuring that the law supports industrial progress and fosters market vitality. Lastly, she suggested that the realm of intellectual property could be investigated through the lens of content generation by AI, calling for in-depth exploration into the acquisition and utilization of data, the legitimacy and rationale of using copyrighted material, interest balancing, the distinction between creative and non-creative use, and the differentiation between general and specialized AI during the data input and training phases. In the data output phase, she highlighted the importance of considering whether the generated content qualifies as a work, its entitlement to intellectual property protection, and the matter of infringement related to AI-generated content. To close the meeting, Hao Mingying shared a passage crafted with ChatGPT:

Faced with the burgeoning growth of generative artificial intelligence, our forum today has delved into the ethical, legal, and intellectual property challenges it presents. Consequently, the legal system must evolve to foster a fair, transparent, and secure environment for the application of generative AI. In the realm of intellectual property, defining the creativity of AI-generated content, determining the protection and ownership rights of AI creations will emerge as pressing issues that demand our attention. We champion interdisciplinary research and collaboration, aiming to establish a robust framework through theoretical innovation and case studies. To guarantee the sustainable advancement of generative AI and proactively tackle the societal, legal, and ethical dilemmas it spawns, let us join forces to steer it towards a future marked by prosperity and harmony through ongoing academic dialogue and regulatory breakthroughs.