Call for Participation

Image licensed under Creative Commons license. Credit: Inkey. Image source: Wikimedia Commons


Artificial intelligence has revolutionized persuasive communication through the emergence of synthetic media—hyper-realistic AI-generated content that challenges the boundaries of trust, authenticity, and ethics. These tools enable the creation of immersive and compelling messages with unprecedented reach and sophistication, but they also raise significant concerns about manipulation and misinformation. Central to addressing these challenges is the concept of AI disclosure, which involves explicitly identifying AI-generated content. Disclosure has the potential to reshape audience perceptions: it can normalize AI use and foster trust, yet it may also provoke skepticism or reduce engagement. This duality underscores the importance of context-specific approaches – whether in healthcare, education, marketing, or political communication – that align ethical transparency with the practical demands of persuasion.

This workshop will delve into the complex interplay between disclosure practices, audience trust, and domain-specific applications, aiming to develop frameworks that address both the opportunities and challenges posed by synthetic media.

Workshop Objectives

1. Investigate Audience Perception: Analyze how the characteristics of AI disclosure shape cognitive processing, emotional responses, and trust in persuasive messages.

2. Discuss Contextual Variability: Examine the influence of user attitudes (e.g., political ideology, cultural values, and technology acceptance) on their engagement with disclosed AI content across domains.

3. Bridge Theory and Practice: Develop actionable theoretical and methodological tools for ethical AI disclosure that balance transparency with effective persuasion.

4. Explore Cross-Domain Applications: Identify best practices for implementing AI disclosure in diverse sectors, including healthcare, education, marketing, and political communication.

5. Foster Industry Collaboration: Create practical guidelines for integrating AI disclosure into the workflows of content creators, regulators, and developers, promoting trust and accountability.

Submission Types

We welcome submissions that engage with the concept of disclosure as a form of ethical AI practice, across broad domains and contexts. Submissions will be reviewed by the workshop program committee based on relevance, innovation, and alignment with the workshop’s objectives. Participants are also welcome to apply to attend the workshop without presenting a paper or organizing a hands-on session.

To present your work or lead a hands-on session at the workshop, please submit your proposal in one of the following formats:

Research papers (up to 6 pages) focusing on theoretical insights or empirical findings.

Work-in-progress papers (up to 4 pages) showcasing emerging ideas or preliminary findings.

Hands-on session proposals (up to 4 pages) designed for interactive, experiential learning activities.

All submissions should follow the Springer LNCS format: https://www.springer.com/gp/computer-science/lncs/conference-proceedings-guidelines

To attend as a non-presenting participant, please contact the organizers: hilalow@gmail.com or nilisteinfeld@gmail.com

Important Dates

Submission deadline: March 1, 2025 (AoE)

Decision notification: March 15, 2025

Camera-ready deadline: March 28, 2025

Workshop date: May 5, 2025
