Not Just Adapting, Leading: Advancing AI Research in Professional Communication

While there is a growing body of scholarship on AI, many questions remain that require further study. Below, we provide a set of research areas and questions that the ProComm membership has identified as opportunities for research.

Introduction

The integration of Artificial Intelligence (AI) into professional communication represents a paradigm shift, moving beyond incremental enhancements to fundamentally altering how technical and scientific information is created, managed, delivered, and consumed. For the IEEE Professional Communication Society (ProComm) community, understanding this evolving landscape is critical.

ProComm issues this call to the academic, professional, and technical communication communities:

While AI has garnered growing scholarly and professional attention, much of the current research remains rooted in early-stage explorations, often focused on basic interactions or framed around AI as a threat, a novelty, or an inevitable evolution. We recognize and value the foundations that have been laid. Yet, at this pivotal moment, we call for more ambitious, targeted, and forward-looking research that reflects the complexity, urgency, and responsibility that the integration of AI demands.

Generative AI, large language models (LLMs), domain-specific AI applications, and automated systems are now embedded in communication practices at every level. These systems are not rule-bound artifacts; they are powered by natural language processing (NLP) and machine learning (ML) technologies that learn from vast datasets to predict, generate, classify, and adapt language dynamically. This shift opens new possibilities for dynamic content assembly, intelligent personalization, and scalable communication support. At the same time, these systems create new risks, including bias amplification, hallucinated information, model drift, and transparency challenges.

Professional communicators must understand not only the user-facing behaviors of AI systems, but also the technologies and systemic behaviors that shape communication outcomes. Effective research must continue to address how these information-processing systems influence audience experience, ethical obligations, content governance, accessibility, equity, and long-term risk management.

This document outlines a series of topic areas and research questions designed not simply to extend existing knowledge, but to challenge assumptions, anticipate consequences, and lead proactive conversations about the role of AI in professional communication. While some of these topics have been discussed in recent articles and presentations, we believe there is still significant need to explore them through additional rigorous, forward-looking research.

We invite projects that work from a firm foundation in communication practices, technological understanding, and scholarly rigor. Such projects are essential to move the field beyond awareness and cautious experimentation and to craft its future with vision, responsibility, and courage.


AI and Audience-Centered Design

Audience-centered design has always been a cornerstone of effective professional communication. The rise of AI-driven communication systems, from language models to adaptive interfaces, dramatically reshapes the relationship between creators, users, and information systems. To maintain the values of clarity, equity, and usability, professional communicators must interrogate how AI technologies engage diverse audiences, amplify or mitigate biases, and alter user expectations.

This area of inquiry demands attention to both how audiences interact with AI and how AI systems themselves can be designed to serve diverse needs responsibly.

Research in this domain must move beyond technical performance to address trust, empowerment, accessibility, and cultural responsiveness.

Understanding Audience Interaction with AI

  • How do users from different linguistic, cultural, and professional backgrounds interact with AI-driven communication tools?
  • How do multilingual students and professionals leverage AI, and what barriers or inequities emerge?
  • How does the integration of AI in content creation affect user trust and engagement, particularly among marginalized audiences? (For instance, do users feel more empowered or more alienated when AI is involved in creating personalized messages?)

Co-Designing Inclusive AI Systems

  • How can AI systems be co-designed with diverse user groups, especially people with visual, cognitive, physical, or linguistic disabilities, to ensure solutions genuinely meet communication needs and avoid reinforcing bias?

Personalization and Bias Mitigation

  • In what ways can AI systems personalize content (e.g., tone, reading level, modality) for different audiences without reinforcing harmful stereotypes or biases?
  • What new forms of audience analysis and adaptation are needed to address the complexities introduced by AI-mediated communication environments?

Ethical Perceptions and User Values

  • How do different user groups (students, educators, practitioners) define “ethical AI use” in communication contexts?
  • How do perceptions of ethical AI use vary across linguistic, cultural, and professional boundaries, and what tensions emerge from these differences?

Measuring and Improving Audience Experience

  • What metrics and evaluation methods should be developed to assess audience experience, accessibility, and inclusivity in AI-mediated communication?
     (E.g., user satisfaction, trust, perceived empowerment, accessibility compliance, comprehension outcomes.)
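As one illustrative starting point for such metrics, a comprehension proxy like the Flesch Reading Ease score can be computed automatically over AI-generated drafts. The sketch below is a minimal, hypothetical example; the syllable counter is a rough heuristic, not a validated instrument, and readability is only one narrow facet of the audience-experience measures these questions call for.

```python
import re

def flesch_reading_ease(text: str) -> float:
    """Flesch Reading Ease:
    206.835 - 1.015 * (words / sentences) - 84.6 * (syllables / words).
    Higher scores indicate text that is easier to read."""
    # Split on sentence-ending punctuation; drop empty fragments.
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)

    def syllables(word: str) -> int:
        # Crude heuristic: count runs of vowels, with a floor of one.
        return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

    total_syllables = sum(syllables(w) for w in words)
    return (206.835
            - 1.015 * (len(words) / len(sentences))
            - 84.6 * (total_syllables / len(words)))
```

A fuller evaluation battery would pair automated proxies like this with direct measures of trust, perceived empowerment, and accessibility compliance gathered from users themselves.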

Ethics, Responsibility, and Stewardship in AI Use

Ethical practice has always been central to professional communication. The emergence of AI technologies amplifies the stakes with the capacity to automate, mediate, and transform communication processes at scale. Professional communicators must engage with AI not simply as users, but as stewards of transparency, accountability, fairness, and inclusivity.

Navigating AI’s ethical landscape requires developing practical frameworks, scrutinizing systemic biases, balancing innovation with risk, and advocating for responsible governance at organizational and societal levels.

This research area calls for actionable, grounded work that bridges high-level ethical principles and the real-world decisions communicators must make. Such work often occurs in high-stakes, regulated environments where accuracy, trust, and human dignity are non-negotiable.

Frameworks and Governance Structures

  • What frameworks, heuristics, and decision-making models can guide ethical AI use in professional communication?
  • How can organizations effectively translate high-level AI ethics principles into concrete practices and workflows?
     (E.g., What internal review processes, checklists, or audit practices ensure that principles like transparency and fairness are operationalized in content creation or decision-making?)
  • What are the key components of AI governance frameworks (e.g., ethics review boards, mandatory bias detection tools, disclosure policies) that technical communicators perceive as most effective in promoting accountability?

Transparency, Accountability, and Explainability

  • How should transparency, accountability, and fairness be defined and operationalized when communication involves AI-generated content?
  • What are the best strategies for auditing and explaining AI system decisions to stakeholders and the public? (This includes communicating complex algorithmic processes in understandable, accurate ways and assigning responsibility for errors.)
  • In cases where AI-driven content or decisions cause harm, how should responsibility be assigned and enforced? (E.g., Does accountability rest with developers, deploying organizations, or those overseeing AI workflows?)

Ethical Practices in Regulated and High-Stakes Domains

  • In traditionally regulated industries (e.g., healthcare, finance), what validation processes (e.g., SME review checklists, simulated use testing, comparative analysis) do technical communicators use to mitigate risks associated with AI-generated inaccuracies?
  • How do these validation processes compare in effectiveness (e.g., as measured by error rates or usability testing)?
  • How do technical communicators in high-risk, accuracy-dependent environments navigate the tension between AI-driven efficiency and the ethical imperative for absolute accuracy and bias avoidance, particularly regarding forward-looking statements or critical reporting?
  • What undocumented heuristics or tacit knowledge are practitioners relying on when validating or editing AI-assisted communication products?

Ownership, Creativity, and Co-Authoring Ethics

  • How does the level of AI contribution (e.g., generating an outline vs. drafting full sections vs. suggesting rephrasings) in a co-authoring workflow influence technical communicators’ reported sense of authorship, creative satisfaction, and perceived quality of the final output?

Equity, Bias, and Access

  • How are AI tools supporting, or failing, users with visual, cognitive, physical, or linguistic disabilities?
  • What systemic biases are amplified or mitigated by current AI systems, and what interventions can professional communicators design or advocate for to improve equity?
  • How can open-source and publicly available AI tools be leveraged to expand access and mitigate inequities, rather than deepen digital divides?

Global Stewardship and Standardization

  • How can we address the tension between protecting data privacy and ensuring that diverse datasets are available to build inclusive, unbiased AI systems?
  • To what extent can ethical AI stewardship be standardized globally, and what role should professional societies (like IEEE ProComm) play in shaping international norms and fostering cross-cultural ethical alignment?

Content Strategy, Knowledge Management, and AI

The rise of AI technologies demands a reimagining of content strategy and knowledge management practices in professional communication. Where traditional approaches emphasized structured creation, maintenance, and reuse of content assets, AI introduces dynamic, generative systems that can create, adapt, and repurpose information at unprecedented scales.

Professional communicators are increasingly responsible not just for writing or organizing content, but for designing, governing, and maintaining entire AI-augmented knowledge ecosystems, ensuring that information remains accurate, accessible, ethical, and strategically aligned across its full lifecycle.

Research in this domain must explore how AI reshapes content strategy foundations: audience targeting, personalization, governance, reuse, versioning, information retrieval, and content performance measurement.

AI Integration into Content Ecosystems

  • How are AI systems and their constituent components (including language models, retrieval-augmented generation systems, and adaptive content engines) being integrated into content management systems (CMSs), knowledge bases, and analytics platforms?
  • What new roles must professional communicators and content strategists assume in curating, validating, and governing AI-driven content ecosystems?
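To make the retrieval-augmented generation pattern named above concrete, the sketch below shows its retrieval step in miniature: rank knowledge-base chunks by similarity to a query, then pass the top chunks to a language model as grounding context. Production systems use dense embeddings from a trained model; simple term-frequency vectors stand in here so the example stays self-contained, and all names and documents are hypothetical.

```python
from collections import Counter
import math

def tf_vector(text: str) -> Counter:
    """Represent a text as raw term-frequency counts (a stand-in for embeddings)."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse term-frequency vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, knowledge_base: list[str], top_k: int = 2) -> list[str]:
    """Rank knowledge-base chunks by similarity to the query; in a full RAG
    pipeline the top chunks would be prepended to the model prompt."""
    qv = tf_vector(query)
    ranked = sorted(knowledge_base,
                    key=lambda doc: cosine(qv, tf_vector(doc)),
                    reverse=True)
    return ranked[:top_k]
```

The governance questions in this section apply directly to this step: which chunks are retrievable, how they are versioned, and who validates them all shape what the model ultimately says.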

Domain-Specific Models and Knowledge Integrity

  • What risks and best practices emerge when domain-specific models or datasets are trained to support technical and professional communication tasks?  (E.g., How do smaller, specialized models affect content accuracy, terminology control, regulatory compliance, and brand voice?)
  • How can communicators adapt traditional content governance practices (e.g., controlled vocabularies, style guides, and metadata strategies) to AI-driven content creation and retrieval environments?

Early Assessment, Content Quality, and AI Evolution

  • How can professional communicators develop early evaluation frameworks to assess new AI capabilities for content generation, personalization, and management?
  • What indicators (e.g., consistency, tone alignment, domain accuracy, audience fit) should be prioritized in evaluating AI-generated content within evolving technical ecosystems?
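One of these indicators, terminology consistency against a controlled vocabulary, lends itself to automated checking. The sketch below is a deliberately simple, hypothetical example; a real pipeline would also handle inflection, multi-word phrases, and context-dependent usage.

```python
def terminology_violations(text: str, preferred: dict[str, str]) -> list[tuple[str, str]]:
    """Flag deprecated terms in a draft, mapping each occurrence to the
    preferred controlled-vocabulary term. `preferred` maps deprecated -> preferred."""
    hits = []
    for word in text.lower().split():
        word = word.strip(".,;:!?")  # minimal punctuation cleanup for matching
        if word in preferred:
            hits.append((word, preferred[word]))
    return hits
```

A check like this could run over AI-generated drafts before human review, turning "terminology control" from a style-guide aspiration into a measurable indicator.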

Rethinking Reuse and the Knowledge Lifecycle

  • How does AI-driven reuse differ from traditional modular content reuse strategies (e.g., DITA, content chunking)? What are the social and policy implications of the similarities and differences in reuse?
  • What new models of content lifecycle management are needed to govern AI-mediated knowledge ecosystems, ensuring that content creation, curation, revision, and retirement remain transparent, traceable, and strategically aligned?
  • How can communicators ensure content trustworthiness, traceability, and quality assurance when AI models synthesize, rephrase, or adapt existing knowledge assets?

Innovation, Technology Development, and Communication Practice

Professional communicators do not merely respond to technological innovation; they are participants in how technologies are understood, adopted, and integrated into practice. The rapid evolution of AI-driven systems, from content generation to task automation to agentic collaboration, has created a demand for communicators who actively engage in technology development, interface design, and workflow innovation.

By embedding communication principles such as clarity, accuracy, inclusivity, and usability into emerging AI tools and standards, professional communicators can ensure that innovation serves human needs rather than undermining them.

Influence of AI on Communication Workflows

  • How are engineering, technical, and communication fields adopting AI-driven workflows, and how is communication quality affected?
  • What are the most effective human–AI collaboration models for content creation and review in professional settings? (E.g., Should AI draft and humans edit? Should humans draft and AI revise? How do different workflows impact quality, efficiency, and writer satisfaction?)

Accuracy, Verification, and Risk Management

  • What are the best strategies for ensuring the accuracy, consistency, and trustworthiness of information in AI-assisted communication, especially in critical technical documentation?
  • How can organizations safeguard sensitive information while benefiting from AI systems that require access to large content repositories? (What models balance data security, IP protection, and operational efficiency?)

Audience Perception and Transparency

  • How does the use of AI-generated content impact audience trust, engagement, and credibility perceptions? (Should AI involvement be disclosed? How does transparency about AI usage shape user attitudes across different cultural or technical audiences?)

Content Strategy, Knowledge Reuse, and AI

  • In the context of content strategy and information management, how can AI be integrated to reuse and repurpose knowledge assets intelligently without losing context, nuance, or domain-specific expertise?
  • What new models of modularity, reuse, and versioning emerge when AI systems dynamically assemble or adapt communication content?

Emerging Roles, Skills, and Professional Identities

  • What new skills and roles are emerging for communication professionals due to AI integration? (E.g., prompt engineering, AI content curation, conversational design, model rating and evaluation.)
  • How should competencies for these new roles be defined, cultivated, and validated in professional development and training programs?

Rethinking Methods for AI-Era Communication Research

As AI systems increasingly mediate, generate, and transform communication, traditional research methods must evolve to meet new challenges. Techniques such as interviews, usability studies, ethnographic observation, and rhetorical analysis remain foundational. However, these methods were developed to study relatively stable, human-created texts and systems. Content that is probabilistic, adaptive, and continuously evolving demands fresh scrutiny of our research methods, treating AI both as a methodological resource and as an artifact of study.

Professional communication researchers must expand and innovate our methodological toolkit, developing approaches that account for AI’s dynamic behaviors, hidden biases, opaque processes, and socio-technical complexity. Understanding how we study AI-mediated communication will be as critical to advancing the field as what we choose to study.

Adapting Traditional Methods

  • How must established methods such as usability testing, ethnographic observation, and content analysis be updated to account for the dynamic, adaptive, and often unpredictable behaviors of AI-mediated communication systems?
  • What adjustments are needed to maintain validity and reliability when studying communication environments shaped by non-deterministic AI outputs?
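One simple way to probe reliability under non-determinism is to sample the same prompt repeatedly and measure how much the outputs agree. The sketch below uses mean pairwise Jaccard word overlap as a crude, illustrative stability score; semantic similarity measures would be stronger in practice, and the function name is our own invention.

```python
from itertools import combinations

def jaccard(a: str, b: str) -> float:
    """Word-set overlap between two texts (0 = disjoint, 1 = identical sets)."""
    sa, sb = set(a.lower().split()), set(b.lower().split())
    return len(sa & sb) / len(sa | sb) if sa | sb else 1.0

def output_stability(outputs: list[str]) -> float:
    """Mean pairwise Jaccard similarity across repeated generations for the
    same prompt. Low values warn that single-run findings may not replicate."""
    pairs = list(combinations(outputs, 2))
    return sum(jaccard(a, b) for a, b in pairs) / len(pairs)
```

Reporting a stability score alongside study findings is one possible adjustment: it makes explicit how much any single observed output can be trusted to represent the system's behavior.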

Identifying Emerging Methodologies for AI-Specific Challenges

  • What new research methodologies are needed to study AI-specific phenomena such as model drift, probabilistic content generation, explainability challenges, bias amplification, and hallucinated outputs?
  • How can professional communication research meaningfully incorporate emerging techniques such as model probing, prompt engineering evaluation, bias auditing, and dynamic risk monitoring?
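As a minimal illustration of what drift monitoring might look like, the sketch below computes the Population Stability Index (PSI) over categorical labels (e.g., topic or sentiment tags) assigned to a model's outputs at two points in time. The ~0.25 threshold is a conventional rule of thumb, not a standard, and the label distributions here are hypothetical.

```python
import math

def drift_score(baseline: dict[str, int], current: dict[str, int],
                eps: float = 1e-6) -> float:
    """Population Stability Index between two categorical label distributions.
    Values near 0 suggest stability; values above ~0.25 are a conventional
    signal of significant drift worth investigating."""
    categories = set(baseline) | set(current)
    n_base, n_cur = sum(baseline.values()), sum(current.values())
    psi = 0.0
    for c in categories:
        p = baseline.get(c, 0) / n_base or eps  # avoid log(0) for empty bins
        q = current.get(c, 0) / n_cur or eps
        psi += (q - p) * math.log(q / p)
    return psi
```

Tracking a score like this over successive model versions or time windows is one concrete form the "dynamic risk monitoring" named above could take.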

Using Hybrid, Cross-Disciplinary Approaches

  • What hybrid research methodologies are most effective for studying AI systems that simultaneously produce, filter, and transform communication artifacts?
  • How can interdisciplinary approaches (drawing from computer science, UX research, science and technology studies, and communication) be strategically combined to study AI-mediated communication phenomena?

Designing Longitudinal and Dynamic Research

  • How can longitudinal studies, scenario-based simulations, living labs, or continuous monitoring systems be designed to track the evolution of AI capabilities, outputs, and risks within technical and professional communication environments?
  • How can researchers systematically capture both short-term user experiences and long-term systemic shifts in communication practices resulting from AI adoption?

Foresight, Risk Anticipation, and Global Governance

AI systems are developing faster than regulatory frameworks, technical standards, and public understanding can adapt. Professional communicators must not only react to technological shifts, but also play a proactive role in anticipating risks, shaping narratives, and influencing global governance strategies.

Foresight is essential to ensuring that AI development aligns with human values, social good, and equitable access to information. Research in this area must explore how technical communicators can participate meaningfully in risk anticipation, public education, policy advocacy, and the construction of global norms around ethical, transparent, and responsible AI use.

Frameworks for Risk Anticipation and Foresight

  • What foresight frameworks, scenario planning methods, or research methodologies are best suited to anticipating emerging risks and opportunities in AI-driven communication systems?
  • How can technical communicators help design early-warning systems or monitoring protocols that detect emerging ethical, safety, or trust-related risks before they become systemic problems?

Professional Communicators in Policy and Regulatory Ecosystems

  • How can professional communicators play a proactive role in shaping regulatory discussions about AI (e.g., hiring algorithms, content moderation systems, safety standards)?
  • What communication strategies are most effective for bridging technical complexity and public understanding in regulatory and governance contexts?
  • How can professional societies like IEEE ProComm contribute to international efforts to establish ethical standards for AI development, deployment, and transparency?

Comparative Global Approaches to AI Governance

  • How are governments across different regions (e.g., EU, U.S., China, Global South) regulating AI technologies, and what lessons can be drawn for global technical communication practices?
  • How can professional communicators assist in harmonizing regulatory frameworks across cultures, legal systems, and political ideologies without sacrificing local context or ethical nuance?

Short-Term and Long-Term Risk Integration

  • How can governance models simultaneously address immediate AI risks (e.g., misinformation, bias, surveillance) and long-term, systemic risks (e.g., autonomy loss, existential threats) without fragmenting regulatory attention?
  • What role can communicators play in balancing urgent risk communication with broader, future-oriented advocacy and education efforts?

Education, Pedagogy, and Professional Development in the Age of AI

The emergence of AI technologies is fundamentally reshaping educational environments, professional training models, and the competencies required for technical and professional communicators. Educators must rethink curricula, learning outcomes, and assessment methods to ensure that students and practitioners are not only proficient in AI tool usage, but also capable of critically evaluating, ethically applying, and innovatively extending these technologies.

Professional development must shift from periodic upskilling to continuous, anticipatory learning models that adapt to the rapid evolution of AI capabilities. Research in this area must examine how education and training systems can empower communicators to lead in AI-mediated environments while maintaining humanistic values, ethical rigor, and critical agency in an era of automation and augmentation.

Curriculum and Pedagogy Redesign

  • How should communication curricula be redesigned to ensure that graduates are both competent in AI tool usage and grounded in ethics, critical thinking, and audience-centered design principles?
  • Should AI-focused content be integrated across existing courses (e.g., writing, design, information architecture), or should standalone courses on “AI for Communicators” be developed? (What balance between technical proficiency and ethical reasoning best prepares students?)

Impact of AI on Learning Outcomes and Skill Development

  • What are the impacts of AI assistance (e.g., drafting tools, summarization aids, grammar and style checkers) on learning outcomes in writing, technical communication, and knowledge management education?
  • Does the use of generative AI tools enhance or erode students’ abilities in core competencies such as critical analysis, creativity, rhetorical judgment, and ethical reasoning?

Teaching AI Literacy and Critical Engagement

  • What strategies are most effective for teaching “AI literacy” within communication education? What areas of content knowledge and applied skills should be prioritized and developed?
  • What key concepts (e.g., data bias, algorithmic decision-making, prompt engineering, explainability) must be emphasized to cultivate a generation of communicators who are both skilled users and critical evaluators of AI technologies?

Assessment in an AI-Pervasive Learning Environment

  • How can educational programs fairly assess communication competencies in environments where AI assistance is ubiquitous? (E.g., Do new models such as in-class writing, portfolio assessment, oral defenses, or reflective analysis better capture authentic learning in an AI-mediated context?)

Ongoing Professional Development and Lifelong Learning

  • How can professional development for practicing communicators be restructured to address the continual evolution of AI tools and practices? (E.g., Micro-credentialing, modular certifications, project-based peer learning communities, industry–academia partnerships.)
  • What models of lifelong anticipatory learning are most effective in preparing professional communicators for careers shaped by rapid technological change?
