MARADMIN 635/24

Guiding Principles for the Ethical Use of Artificial Intelligence by Communication Strategy and Operations

This MARADMIN establishes guiding principles for the ethical use of Artificial Intelligence (AI) by the Marine Corps Communication Strategy and Operations (COMMSTRAT) community. It mandates that AI tools balance operational efficiency with the protection of national security and public trust, requiring transparency, human oversight, and adherence to DOD principles, while prohibiting the creation of AI-generated photo-realistic imagery, video, or news stories for public release.

Issued: December 30, 2024
1. As Artificial Intelligence (AI) technology,
including Generative AI and Large Language Model (LLM) systems,
enhances operations within the Marine Corps, it is crucial for the
Communication Strategy and Operations (COMMSTRAT) community to
manage the application of AI ethically and effectively. The
Communication Directorate mandates that the COMMSTRAT community
adhere to the following guiding principles for the ethical use of
AI. These principles emphasize the requirement to balance the
efficiencies of AI with protecting national security interests and
maintaining public trust. Key aspects include the importance of
safeguarding national security, transparency about content enhanced
using AI, the necessity of human oversight in the release of
information, and an understanding of the inherent limitations of AI.
These principles aim to responsibly integrate AI as a tool for use
in the execution and evaluation of communication plans and
strategies while adhering to ethical standards and protecting
national security.
2. Use of AI will align with the DOD Principles of Information to
ensure the information released is truthful and accurate. AI can be
used to increase editorial efficiency or help inform understanding,
support legitimate command narratives, and counter mis-, dis-, and
malinformation. 
3. To uphold our responsibility to protect information when
utilizing AI tools, it is necessary to ensure adherence to security,
accuracy, privacy, and propriety standards. Only DOD-approved AI
technologies will be used.
4. Imagery and video are uniquely truthful and compelling media
for informing understanding. As such, we will not employ AI to
create photo-realistic imagery, video, or news stories for public
dissemination. AI will only be used to review written products or
to assist with corrective techniques for visual information as
specified in reference D. Products adjusted with AI will be
annotated to note that adjustment in both the caption and metadata
(e.g., basic correction of color done with AI).
5. AI can be used to automate processes and enhance understanding
of the information environment.
6. It is our responsibility to protect the trust of key publics
and the integrity and legitimacy of our commands and missions
through the content we release. Overreliance on AI can lead to
atrophy of our creative and technical skills and erode our
proficiency in conventional content creation methods.
7. Release authorized by SES April Langwell, Director of
Communication, Communication Directorate.