03 of 10

AI Skills

Working with the most powerful tools your generation will ever touch

In 1995, knowing how to use a search engine was a curiosity. By 2005, it was a basic literacy. The people who learned early did not get a small advantage; they got a compounding one, because every time they wanted to know something, they got the answer ten minutes faster than the people who didn’t. Multiply that across a decade and a career, and the gap becomes enormous.

You are now sitting on the same kind of inflection point, except the tool is more powerful. AI models can read, write, summarize, analyze, code, design, translate, and reason at a level that would have looked like science fiction five years before you read this. The cost of using them well is dropping by the month. The skill of using them well — the part that determines whether you get a brilliant collaborator or a confidently wrong machine — is mostly invisible to people who haven’t practiced it.

The good news: the skill is learnable. Faster than you expect. Most adults around you will never quite catch up. That window is your opportunity.

The core principles

The model is not magic. It is a pattern engine that needs context. If you ask a vague question, you will get a vague answer. If you provide rich context — who you are, what you’re trying to do, what you’ve already tried, what good and bad outputs look like — you get something closer to working with a thoughtful collaborator. The quality of your output is mostly the quality of your input.

Prompt engineering is the entry skill. Context engineering is the next one. A prompt is one well-crafted message. Context engineering is curating the entire information environment the model is reasoning in: which documents it has access to, what role you’ve assigned it, what the prior turns look like, what tools it can use. The shift from one to the other is the same shift as from “I asked a friend a question” to “I gave a colleague a project brief, the relevant files, and a deadline.”
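To make the shift concrete, here is a rough sketch in Python of the two styles, using a generic chat-message format. The role names, field names, and strings below are illustrative, not any specific vendor's API:

```python
# A bare prompt: one message, no context.
bare = [{"role": "user", "content": "Improve my essay."}]

# Context engineering: curate the whole information environment the model sees.
assignment_brief = "Compare two revolutions; 800 words; argue a clear thesis."
draft = "The French Revolution began in 1789 because people were angry."

briefed = [
    {"role": "system",
     "content": "You are an editor for a 10th-grade history essay. "
                "Be direct: flag weak arguments, not just grammar."},
    {"role": "user", "content": "Assignment brief:\n" + assignment_brief},
    {"role": "user", "content": "My draft:\n" + draft},
    {"role": "user",
     "content": "I've already fixed spelling. Focus on structure and evidence."},
]
```

The second version is longer to write once, but it is the difference between the friend's question and the colleague's project brief.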

Verify, always. Models can be confidently wrong. They will produce plausible citations to papers that do not exist, write code that compiles but does the wrong thing, and summarize documents in ways that subtly misrepresent them. The skill of verification — checking the output against the source, running the code, asking does this match reality? — is not optional. It is the difference between a competent AI user and a dangerous one.
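A toy example of why "looks correct" is not enough. The function below is entirely illustrative: it runs without error and reads plausibly, but it is subtly wrong, and only running it against a case you can check by hand reveals that:

```python
# Suppose an AI wrote this to compute the average of a list.
def average(xs):
    return sum(xs) / (len(xs) - 1)   # subtle bug: the divisor should be len(xs)

# Verification: don't just read the code. Run it on a case you know the answer to.
# The mean of 2, 4, 6 is 4.
result = average([2, 4, 6])
print("got", result, "expected 4 ->", "OK" if result == 4 else "BUG FOUND")
```

The code "compiles"; the answer is 6.0. That gap is exactly what verification exists to catch.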

Agentic AI is the next frontier. A model that answers one question is useful. A model that can plan a task, break it into steps, use tools, observe results, and adapt — that is a different kind of system. Learning how these agents are built (LangChain, LangGraph, MCP, frameworks that connect models to real tools) is the equivalent in 2026 of learning HTML in 1996. Even if you never build one yourself, understanding what they can and cannot do will shape every job you ever have.
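Even if you never ship one, the skeleton of an agent is worth seeing once. Below is a minimal sketch of the loop: plan, act with a tool, observe, continue. Everything here is a stand-in, with a hard-coded plan where a real agent would have a model deciding each step, and fake tools where a real agent would call search engines or code runners:

```python
# Toy tools. In a real agent these would hit real services.
def tool_search(query):
    return f"3 results about {query!r}"          # stand-in for a search tool

def tool_summarize(text):
    return text[:40] + "..."                      # stand-in for a summarizer

TOOLS = {"search": tool_search, "summarize": tool_summarize}

def run_agent(task):
    plan = ["search", "summarize"]                # a model would produce this plan
    observation = task
    log = []
    for step in plan:
        observation = TOOLS[step](observation)    # act, then observe the result
        log.append((step, observation))
    return observation, log

result, log = run_agent("alkali metals")
```

Frameworks like LangChain and LangGraph are, at their core, industrial-strength versions of this loop: managed plans, real tools, and memory between steps.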

The model is not your replacement. It is your amplifier. The amplifier is only useful if the signal you put in is good.

What changes when you have these skills

Tasks that used to take a day take an hour. Tasks that used to be too expensive to bother with become trivial. The bottleneck shifts from can I produce the output? to can I evaluate the output, choose the right one, and apply judgment? — which is exactly where humans still hold the advantage. Every domain you care about — writing, coding, design, research, analysis — gets re-shaped by this. The people who treat AI as a calculator that does specific tricks will fall behind. The people who treat it as an apprentice they are training will pull ahead.

How to practice

  1. Use AI every day, on real tasks. Not toy questions — actual work: an essay you have to write, a piece of code that has to run, a study guide for a topic you're learning. Daily use is non-negotiable. You do not learn this skill from articles about it.
  2. Treat each prompt as a brief. Include: who you are, what you want, the audience, the format, examples of good and bad output, and what to avoid. Save your best prompts. Reuse them. Refine them.
  3. Always ask a follow-up. First answers are usually generic. Push: "What's the strongest counter-argument to this?" "What did you leave out?" "Rewrite this assuming the reader is hostile." The interesting work happens in the second and third turn.
  4. Build one small agent. Pick a task you do regularly — summarizing an article, organizing notes, drafting an email — and build a small agentic workflow that does it end-to-end. The first one will be clumsy. You will learn more from building it than from reading ten articles.
  5. Verify every load-bearing claim. If the AI tells you a fact you'd cite, look it up. If it writes you code, run it. Every verification you do trains your sense of where these models are reliable and where they bluff.
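Item 2 above, the prompt-as-brief, is easy to turn into a reusable template so that no part of the brief gets forgotten. A minimal sketch; the field names are just one reasonable choice:

```python
def build_brief(who, goal, audience, fmt, good_example, avoid):
    """Assemble a prompt-as-brief from its parts."""
    return (
        f"Who I am: {who}\n"
        f"What I want: {goal}\n"
        f"Audience: {audience}\n"
        f"Format: {fmt}\n"
        f"What good looks like: {good_example}\n"
        f"Avoid: {avoid}"
    )

prompt = build_brief(
    who="a 10th grader studying chemistry",
    goal="explain why Group 1 elements behave alike",
    audience="me, seeing electron configurations for the first time",
    fmt="short paragraphs, one worked example, one experiment idea",
    good_example="plain language first, notation second",
    avoid="jargon without definitions; answers longer than a page",
)
```

Save a few of these filled in for your recurring tasks; refining them over time is the "save, reuse, refine" habit in practice.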
Resources

  • Course: Andrej Karpathy's free YouTube series on LLMs — from how they're built to how they're used.
  • Site: Anthropic's prompt engineering guide and the Claude Cookbook on GitHub — practical, current.
  • Tool: Use ChatGPT, Claude, and Gemini in parallel for a month. Their differences will teach you more than any single guide.
  • Repo: The LangChain and LangGraph documentation — start with the simplest agent tutorial and build up.
  • Book: Co-Intelligence by Ethan Mollick — short, current, and full of examples of working with AI as a collaborator.

Try it now: a 10th-grader’s playground

The fastest way to learn this is to use it on something you actually care about. Pick a subject. Paste the prompt into Claude, ChatGPT, or Gemini. Read the answer. Push back. Notice what happens.

Chemistry — Make the periodic table click

I'm a 10th grader. Explain why elements in the same group of the periodic table have similar chemical properties, using one concrete example (Group 1 — alkali metals). Use simple language, give the electron configuration, and end with one experiment I could imagine doing to see this similarity in action.

Why this works: It tells the AI who you are, what format you want (explanation + example + electron config + thought experiment), and bounds the scope. Compare the answer to your textbook’s section. Notice what each does better.

Mathematics — Stop memorizing, start understanding

Teach me the quadratic formula like you're a tutor who genuinely cares that I understand it. First, *derive* it from completing the square, showing every step. Then give me a real-world problem (a thrown ball, a profit equation) where it actually matters. End with one common mistake students make, and how to avoid it.

Why this works: Asking for derivation forces the AI to show the why, not just the formula. The real-world tie cements the use. The “common mistake” prompt gets you the kind of insight a senior tutor would notice but a textbook usually skips.
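You can also verify the tutor's formula yourself in a few lines: compute the roots, then substitute them back and confirm the equation comes out to zero. This is a numerical sanity check, not a proof, and the thrown-ball setup below (height h(t) = -4.9t² + 20t) is just one example:

```python
import math

def quadratic_roots(a, b, c):
    """Real roots of a*x^2 + b*x + c = 0 via the quadratic formula."""
    disc = b * b - 4 * a * c
    assert disc >= 0, "no real roots"
    r = math.sqrt(disc)
    return (-b + r) / (2 * a), (-b - r) / (2 * a)

# Thrown ball: height h(t) = -4.9 t^2 + 20 t is zero at launch and landing.
t1, t2 = quadratic_roots(-4.9, 20, 0)

# Verify: substituting each root back in should give (almost exactly) zero.
for t in (t1, t2):
    assert abs(-4.9 * t * t + 20 * t) < 1e-9
```

This is the verification habit from earlier in the chapter applied to math: the AI derives, you check.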

Physics — Build intuition, not just answers

A ball is thrown straight up at 20 m/s. I want to *understand*, not just calculate, what happens. Walk me through: 1. What forces act on it at different points. 2. What its velocity and position graphs look like over time. 3. What changes if we threw it on the moon. Use plain English first, then add the math.

Why this works: You’re asking for understanding framed as a story. The “moon” twist tests whether the AI is reasoning with you or just retrieving a memorized answer. If it stumbles, that itself is information.
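And once the AI has walked you through the story, you can check its numbers yourself. The kinematics are a few lines, using the standard approximations g ≈ 9.8 m/s² on Earth and g ≈ 1.62 m/s² on the Moon:

```python
def peak(v0, g):
    """Time to the top and maximum height for a ball thrown straight up at v0."""
    t_top = v0 / g             # velocity hits zero at the peak: v = v0 - g*t
    h_max = v0**2 / (2 * g)    # from v^2 = v0^2 - 2*g*h with v = 0 at the top
    return t_top, h_max

t_earth, h_earth = peak(20, 9.8)   # about 2.0 s up, about 20.4 m high
t_moon, h_moon = peak(20, 1.62)    # about 12.3 s up, about 123.5 m high
```

If the AI's Moon answer does not land near six times the Earth height, one of you is wrong, and finding out which is the whole exercise.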

Social Science — Compare two events, build judgment

Compare the French Revolution (1789) and the Indian independence movement (1947) on three dimensions: causes, methods, and outcomes. What do they have in common? What's most different? Where did each fall short of its own ideals? Be honest, not flattering.

Why this works: Comparison forces structured thinking. “Where did each fall short” pushes past the textbook narrative into something more grown-up. The phrase “be honest, not flattering” genuinely changes how AIs answer — try it with and without and watch the difference.

English & Poetry — Reading like a writer

Here is a poem: [paste a poem you're studying — Frost, Tagore, Wordsworth, anything]. Help me read it the way a poet would: walk me through the imagery, the sound (rhythm, repetition), what's actually happening in the speaker's mind, and the one line that does the most work in the poem. Don't just summarize the plot.

Why this works: The instruction “don’t just summarize” is doing real work — without it, AIs default to plot summary. With it, you get analysis. Once you see this pattern, you’ll see it everywhere.

Then push back

After any answer, follow up with one of these:

  • “What did you leave out?”
  • “Rewrite this assuming I’m a skeptical reader.”
  • “What’s the strongest counter-argument to your explanation?”
  • “Where is this answer most likely to be wrong?”

The first answer is a starting point. The third or fourth answer is where the real learning happens.

Watch & learn

A small shelf of the clearest free explanations on the internet, starting with Andrej Karpathy's series from the resources above. Watch them in rough order; each one builds on the last.

Test yourself

Six short questions. Answer each one in your head before reading the explanation beneath it. The point is not the score — it is to make sure the ideas are sticking.

1. You ask an AI a question and get a vague, generic answer. The most likely reason is:

The quality of your output is mostly the quality of your input. Vague prompts produce vague answers — almost every time.

2. An AI states a fact in its answer that you want to use in your essay. Before citing it, you should:

AIs can be confidently wrong — they will produce plausible-looking facts and even citations to papers that don't exist. Always verify anything you would put your name to.

3. Which of these is the strongest prompt to use for a 10th-grade chemistry essay on photosynthesis?

A good prompt is a brief: who you are, what you want, what format, and what to avoid. The clearer the brief, the better the output.

4. You ask an AI to write code for a math problem. The code looks correct. The right next step is:

Verification is the difference between a competent AI user and a dangerous one. "Looks correct" is not the same as "works." Always run it.

5. "Context engineering" — beyond just prompt engineering — refers to:

Prompt engineering is one well-crafted message. Context engineering is the entire briefing — like the difference between asking a friend a question and giving a colleague a project brief plus the relevant files.

6. Two students use AI on the same physics problem. Student A copies the answer. Student B asks the AI to explain the concept, then solves it themselves. Five years from now:

AI is most powerful when it amplifies a mind that is already learning, not when it replaces the learning. The kids who use it as a tutor pull ahead. The kids who use it as a substitute fall behind.


Going deeper

When the basics start feeling comfortable, the next rung of the ladder is the agent side: work through the LangChain and LangGraph tutorials from the resources above, and build the small end-to-end agent from the practice list.

Start here

For the next assignment, project, or piece of writing you have to produce, work with an AI as your collaborator from start to finish. Not to copy from — to think with. Notice where it speeds you up, where it leads you astray, and what kinds of prompts produce its best work.