I came across a very interesting idea from the author まじん (Majin) on note.com:
- Original version of the prompt: https://note.com/majin_108/n/n39235bcacbfc
- Updated and improved version: https://note.com/majin_108/n/nd11d1f88a939
Majin used Gemini to turn a single prompt into a complete Google Slides presentation, but I tried customizing it to run with ChatGPT (or Google AI Studio), and the results were quite exciting.
1. 🔍 Structure of Majin’s Prompt
Breaking it down, Majin’s prompt has the following main components:

- Role assignment for the AI: the AI is not just a chatbot, but acts as a Data Scientist and Presentation Designer.
- Absolute mission: from a piece of input text, the AI must output a JavaScript object array (`slideData`) that defines the entire slide structure.
- Fixed blueprint: the prompt includes a Google Apps Script (GAS) framework, where you only need to replace `slideData` to generate the Google Slides deck.
- `slideData` includes:
  - `title`: slide title
  - `content`: content (bullets, text, or tables)
  - `pattern`: display type (Title, List, TwoColumn, Image, …)
  - `speakerNote`: presenter’s notes

👉 The important point: the prompt does not directly create slides — it outputs JSON-like data that the GAS script uses to build the slides.
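To make the shape concrete, here is a minimal sketch of what one `slideData` entry could look like, using the fields listed above. This is illustrative only: the authoritative schema lives inside Majin’s system prompt, and the updated version of the prompt uses somewhat different keys (such as `type`, `points`, and `notes`).

```javascript
// Hypothetical minimal slideData entry, following the fields described above
// (title / content / pattern / speakerNote). Field names are illustrative;
// the real schema is defined in Majin's system prompt.
const slideData = [
  {
    title: 'What is an LLM?',
    content: ['Two files: weights + runner', 'Runs offline'],
    pattern: 'List',
    speakerNote: 'Open with the simplest mental model.'
  }
];

// The GAS framework consumes this array; each object becomes one slide.
console.log(slideData.length, slideData[0].pattern);
```

The key design idea is that the AI only produces this data structure; all rendering logic stays fixed in the GAS blueprint.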
2. 🛠️ Customizing & Running with ChatGPT
First, download `system prompt.md`.
🔧 Steps inside the Project
- Create a new Project → give it a name (e.g., Google Slide Auto Generator).
- Add a file → upload `system prompt.md`.
- In the Project Settings / Overview screen, you will find the Instructions (System message) box → paste the line:
  添付した「system prompt.md」に忠実に従ってください。
  (Meaning: please strictly follow the attached system prompt.md.)
- If you don’t see a dedicated “Instructions” field, ChatGPT may show “Description” or “Setup” instead → paste it there. It works as the System Prompt for the whole Project.
- From now on, in that Project, whenever you input raw text, ChatGPT will immediately reference `system prompt.md` and output both `slideData` and the GAS code.

👉 In other words: the Project’s Instructions act as a dedicated System Message for that workspace. The UI may look different, but it’s always under “Overview/Setup” when you open the Project.
Next, type a prompt about the content you want to make slides from → it will generate `slideData`.
I tested it with content about LLMs and got the following result:
```javascript
const slideData = [
  {
    type: 'title',
    title: 'The Busy Person\'s Intro to Large Language Models',
    date: '2025.09.21',
    notes: 'Re-recorded overview of LLMs focusing on what they are, how to run them, how they are trained, and why next-word prediction works'
  },
  {
    type: 'content',
    title: 'Agenda',
    points: [
      'What is an LLM',
      'Running a model locally',
      'Training at scale',
      'Why next-word prediction is powerful',
      'Key takeaways'
    ],
    notes: 'Keep it practical and concrete. We start with the simplest mental model, then show inference vs training, then the intuition for why LLMs encode knowledge'
  },
  {
    type: 'section',
    title: '1. What is an LLM',
    notes: 'Define an LLM at the simplest level and frame the talk around artifacts you can hold on disk'
  },
  {
    type: 'content',
    title: 'LLM = Two Files',
    subhead: 'Weights + Runner',
    points: [
      'Parameters file holds learned weights',
      'Runner implements forward pass of the network',
      'Self contained package runs offline'
    ],
    notes: 'Stress that inference can be simple and local. The interesting part lives in the parameters learned during training'
  },
  {
    type: 'content',
    title: 'Example: Llama 2 • 70B',
    subhead: 'Open weights from Meta',
    points: [
      'Sizes include 7B 13B 34B 70B',
      'Open weights and paper enable local study',
      'Closed APIs trade control for convenience'
    ],
    notes: 'Contrast open weights with closed APIs like hosted assistants without exposing internals'
  },
  {
    type: 'section',
    title: '2. Running a Model',
    notes: 'Give a feel for local inference and why the demo used a smaller model for speed'
  },
  {
    type: 'process',
    title: 'Local Inference Flow',
    subhead: 'From files to generations',
    steps: [
      'Get weights file on disk',
      'Compile or install a runner',
      'Point runner to weights',
      'Send a prompt',
      'Stream generated tokens'
    ],
    notes: 'Mention that a 7B model is fast enough on a laptop while 70B is roughly an order of magnitude slower'
  },
  {
    type: 'compare',
    title: 'Open Weights vs Closed API',
    leftTitle: 'Open Weights',
    rightTitle: 'Closed API',
    leftItems: [
      'Full local control',
      'Offline capable',
      'Tunable and inspectable'
    ],
    rightItems: [
      'Best quality access',
      'No infra to manage',
      'Faster to integrate'
    ],
    notes: 'There is no universal winner. Choose based on control privacy latency and capability needs'
  },
  {
    type: 'section',
    title: '3. Training at Scale',
    notes: 'Explain where weights come from and why training is expensive compared to inference'
  },
  {
    type: 'content',
    title: 'Training Ingredients',
    subhead: 'Data Compute Cost',
    points: [
      'Large curated text corpus',
      'Massive GPU clusters for days',
      'Significant budget for runs'
    ],
    notes: 'Use order of magnitude thinking rather than exact numbers. Emphasize engineering and data work beyond raw compute'
  },
  {
    type: 'timeline',
    title: 'From Corpus to Weights',
    subhead: 'Very high level lifecycle',
    milestones: [
      { label: 'Collect and filter data', date: 'Step 1', state: 'done' },
      { label: 'Train on GPU cluster', date: 'Step 2', state: 'done' },
      { label: 'Evaluate and align', date: 'Step 3', state: 'done' },
      { label: 'Release weights and runner', date: 'Step 4', state: 'next' }
    ],
    notes: 'Keep the audience at the bird\'s eye view. The output is a weights artifact you can reuse everywhere for inference'
  },
  {
    type: 'section',
    title: '4. Why It Works',
    notes: 'Bridge from mechanics to intuition about capability'
  },
  {
    type: 'content',
    title: 'Next Word Prediction Objective',
    subhead: 'Simple objective rich consequences',
    points: [
      'Model predicts a distribution over next token',
      'Good prediction forces world knowledge',
      'Patterns styles and relations get compressed'
    ],
    notes: 'Relate prediction to lossy compression intuition. To predict the next token the model must encode structure facts and relationships'
  },
  {
    type: 'content',
    title: 'Common Misconceptions',
    points: [
      'It is not copy paste of the web',
      'Bigger is not always better',
      'Closed is not always better than open'
    ],
    notes: 'Clarify generalization vs memorization and the role of data quality and algorithms along with scale'
  },
  {
    type: 'section',
    title: '5. Takeaways',
    notes: 'Land the plane with pragmatic guidance'
  },
  {
    type: 'content',
    title: 'Practical Guidance',
    points: [
      'Choose model size for latency and cost',
      'Use open weights for control',
      'Use hosted APIs for frontier quality'
    ],
    notes: 'Map choices to product constraints such as privacy cost latency and maintainability'
  },
  {
    type: 'closing',
    notes: 'Thanks for watching. Invite questions and link resources in the description'
  }
];
```
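Before pasting the generated array into Apps Script, it can be worth sanity-checking it in any JavaScript runtime (Node, browser console), since a malformed entry will fail silently or crash mid-run. A minimal check might look like this — the required-field rules here are assumptions based on the generated output above, not part of Majin’s prompt:

```javascript
// Quick sanity check for a generated slideData array: every entry needs a
// `type`, and every type except `closing` needs a `title`.
function validateSlideData(slides) {
  const errors = [];
  slides.forEach((slide, i) => {
    if (!slide.type) errors.push(`slide ${i}: missing type`);
    if (slide.type !== 'closing' && !slide.title) {
      errors.push(`slide ${i}: missing title`);
    }
  });
  return errors;
}

// Example with one broken entry:
const errs = validateSlideData([
  { type: 'title', title: 'Intro' },
  { type: 'content' } // missing title
]);
console.log(errs); // → [ 'slide 1: missing title' ]
```

An empty array from `validateSlideData` means the data is at least structurally plausible before you hand it to the GAS framework.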
Next steps
- Open the `.md` file on your computer.
- Copy the code from the `.md` file.
- Open Google Slides → create a new slide deck.
- Click Extensions → Apps Script.
- Paste the copied code into the Apps Script editor.
- Paste your `slideData`.
- Press Run.
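To see why this separation of data and rendering is robust, here is a stripped-down sketch of the dispatch pattern a `slideData` renderer can use. This is not Majin’s actual GAS code (which calls the SlidesApp service to build real slides); it only illustrates the map-from-`type`-to-handler idea in plain JavaScript, with handlers that produce text outlines instead:

```javascript
// Dispatch pattern: map each slide's `type` to a handler function.
// The real GAS framework would create slides via SlidesApp instead of
// returning strings; unknown types degrade gracefully.
const renderers = {
  title:   s => `# ${s.title} (${s.date || ''})`,
  section: s => `== ${s.title} ==`,
  content: s => [s.title, ...(s.points || []).map(p => `- ${p}`)].join('\n'),
  closing: () => '(closing slide)'
};

function renderOutline(slides) {
  return slides.map(s => {
    const render = renderers[s.type];
    return render ? render(s) : `(unsupported type: ${s.type})`;
  });
}

const outline = renderOutline([
  { type: 'title', title: 'Demo', date: '2025.09.21' },
  { type: 'content', title: 'Agenda', points: ['One', 'Two'] }
]);
console.log(outline[0]); // → "# Demo (2025.09.21)"
```

Because each `type` has its own handler, adding a new slide pattern means adding one entry to the map — the AI’s output format never has to change shape.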
3. ✅ Experience & Results
- Works well on ChatGPT: no need for Gemini; GPT-5 is enough.
- Advantage: the prompt standardizes the output into a JSON-like object, making it easy to control.
- Reference implementation:
  - Example Google Apps Script project on GitHub: generate-slide-appscript-example
  - Example generated Google Slides: Demo Slide Deck
📌 Conclusion

- Majin’s prompt is a great framework for turning AI into an automatic slide design tool.
- It doesn’t have to be Gemini: ChatGPT (GPT-5) also works well.
- You just need to customize the input → and you can generate Google Slides for any topic (training, pitching, learning…).
👉 This article was written with reference to blog posts by まじん (Majin):

- Note.com – Googleスライドが一瞬で完成する“奇跡”のプロンプト (the “miracle” prompt that completes Google Slides in an instant)
- Note.com – 改良版まじん式プロンプト (the improved Majin-style prompt)