The potential and dangers of AI as it pertains to video and audio

Experts on AI video generation (tools like OpenAI’s Sora, Google’s Veo, Runway Gen-3, Kling, and others) highlight rapid advancements in 2025 and into 2026. These models now produce hyper-realistic, long-form videos with improved motion, consistency, lighting, and narrative depth, often described as crossing or nearing the “uncanny valley.” Professionals in AI, media, and tech note this shift makes AI video a practical tool for creative work, advertising, education, and filmmaking—moving beyond gimmicks to scalable, high-quality output.

Alongside these advances come serious risks. Amplified disinformation undermines public discourse, fuels nationalism, and pressures escalation in conflicts. The spread of synthetic content breeds skepticism toward all media, with surveys showing high public concern about inaccurate information and impersonation.
Ethical and privacy issues: lack of consent in training data (scraping of copyrighted or personal content), bias reinforcement (e.g., stereotypes in generated outputs), IP theft (lawsuits against tools for replicating characters and styles), and non-consensual manipulations (e.g., "nudification" or reputational damage).

Other risks: job displacement in creative industries, reduced transparency in content creation (e.g., in news and journalism), hallucinations and inaccuracies in outputs, and the potential for harmful content (e.g., explicit or violent generations).

Experts stress the need for safeguards such as improved detection tools (moving beyond artifact spotting to multimodal verification) and content authentication and labeling. While benefits like efficiency, accessibility, and creative augmentation exist, the consensus is that the risks of unchecked advancement outweigh the gains without responsible governance.

BROADCASTING

Synthetic voices power AI DJs, news hosts, podcast hosts, or virtual assistants for specific segments. Companies like Futuri Media offer systems for content automation, while some stations experiment with AI for programming choices and even full AI-hosted shows. AI enables dynamic ad optimization, hyper-targeted local ads, programmatic buying, and listener-specific experiences (e.g., auto-generated playlists or effects based on preferences).

In newsrooms and stations, AI supports newsgathering, transcription, summarization, and workflow coordination. A 2026 NewscastStudio survey found over 94% of broadcast professionals expect AI to have the biggest operational impact this year, with adoption doubling in recent years (roughly 25% of broadcasters reported using AI in early 2025, with more anticipating rollout).

As AI-driven text summarization reduces search traffic to written articles, experts (e.g., the Reuters Institute's 2026 predictions) see radio and podcasts prioritized for investment (71% of leaders plan a greater audio focus) because spoken-word content builds trust, habit, and authenticity that are harder for AI interfaces to replace or summarize without losing nuance.

Many view AI as a “tool to support broadcasters, not replace the human connection,” aligning with themes like World Radio Day 2026. Leaders from iHeartMedia and others position major operations as largely human-led while using AI judiciously for efficiency.

Fears persist that AI DJs and hosts could reduce roles in voice tracking, production, and on-air talent, especially in cost-cutting environments. US audiences and professionals express rising anxiety over job loss, and some stations have laid off staff while investing in AI. Experts emphasize adaptation, using AI as a strength rather than a replacement, while acknowledging the pressure of accelerating change.

Audiences want AI use disclosed (94% in some surveys), yet disclosure often reduces trust in the stories or content involved. Experts warn of diluting the human element, radio's core strength in local, personal connection.

Synthetic audio can fabricate quotes from journalists, politicians, or experts, spreading false narratives in radio segments, podcasts, or viral clips. This erodes trust in spoken-word media, which is often seen as more “authentic” than text. Incidents include cloned voices in political manipulation (e.g., fake robocalls in elections) or state-linked campaigns using cloned public figures. Broadcasters worry about deepfakes mimicking trusted personalities to endorse scams or propaganda, leading to a “crisis of trust” where audiences question all audio content.

Jason Remington

Admin/Editor | Airchecks KTOY (WA) | KVAC (WA) | KDFL (WA) | KONP (WA) | KBAM (WA) | KJUN (WA) | KRPM (WA) | KAMT (WA) | KASY (WA) | KBRD (WA) | KTAC (WA) | KMTT (WA) | KOOL (AZ)
