The Starting Problem

For months, I was learning on the fly: documentation, tutorials, conversations with other developers, conversations with Claude. And nothing durable to show for it: a few drafts, loose notes. Nothing consolidated, nothing structured.

I always understood the broad basics of a concept. But the moment I had to dig a bit deeper, I'd forget. That's still the case today — writing doesn't give me perfect memory, but it helps me retain better in the moment, and find things again later.

The real weakness: the things I'd actually dug into were often what I remembered least. The specific details disappeared first. And without a trace, no way to reactivate them.

That wasn't a problem as long as I was working short-term. As soon as I started stacking technologies — Docker, Swarm, Terraform, Ansible, AWS — I felt I was losing the thread.

The Trigger

Two things happened at the same time: I got a Claude subscription, and I was about to tackle my AWS + Terraform + Ansible project. A project bigger than anything I'd done before.

I realized that if I started without something to capture things as I went, I'd end up with scattered pieces of understanding and no way to find them again.

Before installing anything, I took the time to read up on two topics:

  1. How to better structure notes and knowledge. That's where I came across Zettelkasten (Niklas Luhmann's method based on atomic notes — one note = one idea — linked together with bidirectional links; the more a concept is connected, the more central it becomes) and PARA by Tiago Forte (organization into Projects, Areas, Resources, Archives). I took what resonated from each: atomicity and links from the first, categorization by usage from the second. I don't apply either one by the book.
  2. How to use Claude better for learning. Anti-dependence, assistance/autonomy ratio, prompts that challenge rather than hand out the answer.

Only after that did I install Obsidian and create a GitHub repo to host everything.

Concrete Setup

One thing needs to be demystified: it's not "an Obsidian vault". It's a GitHub repo that serves as a knowledge vault. Obsidian is just the tool that lets me visualize it, search inside it, and link files together. The notes are markdown files. If Obsidian disappears tomorrow, the repo stays readable with any text editor.

My repo is public on GitHub. If you want to see what it actually looks like, clone it: github.com/TuroTheReal/devops-cloud_vault.

Current structure:

  • concepts/ — how things work (Docker, Terraform, AWS, networking, Linux)
  • cheatsheets/ — useful commands per technology
  • projects/ — lessons learned per project
  • MOCs/ — maps of content, entry points by topic
  • troubleshooting/ — problems encountered and verified solutions
  • meta/ — templates and guidelines

In Obsidian, two things make the difference compared to a plain markdown folder:

  • [[note]] links — you link a concept note to a project, a troubleshooting entry to a concept. The tool rebuilds the graph of relationships automatically, and that's how you get to visualize that famously satisfying "neural network" once it starts to densify.
  • YAML metadata — tags, status (status/learning, status/mastered), difficulty, creation date. It lets me filter and query later.
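For illustration, here's what the top of a concept note could look like. The field names and values are a plausible sketch, not copied from my actual vault:

```yaml
---
# YAML frontmatter that Obsidian reads as note properties
tags: [terraform, iac]
status: status/learning
difficulty: medium
created: 2025-01-14
---
```

Obsidian exposes these properties to its search and to community plugins, which is what makes the filtering and querying possible later.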

My Concrete Workflow

1. Discovery and Validation

When I come across a new concept, I start by discussing it with Claude to lay the foundation. Then I validate by cross-checking with official documentation. That's non-negotiable.

My source hierarchy is strict:

  1. Official documentation — absolute source of truth, no discussion
  2. Recognized tech blogs (AWS, CNCF, HashiCorp, etc.)
  3. Stack Overflow / in-depth technical discussions

What I refuse: generalist Reddit, non-tech blogs, rephrased content that cites other rephrasings. And in my Claude guidelines, it's spelled out: the AI must treat the official docs as the absolute source of truth. No approximations, no "I read somewhere that".

During this phase, I rephrase what I understood in my own words. I often try an analogy to picture the concept. If I can't rephrase or find an image, I haven't understood it — and I go back to the docs.

For concepts I already partly know, using AI to synthesize is a real time saver. But the moment we move to a specific problem applied to a specific need, you have to go back to the official docs or dig into the source code. AI without context gives a general answer, not a precise one. If I want precision, I have to lay out the context explicitly — otherwise I spend more time verifying its answer than I would have searching myself.

2. Contextualization with Claude

Once I've grasped the general idea, I create a dedicated Claude project for the technology or the project I'm going to work on. On top of my main guidelines loaded globally, that project contains:

  • The official docs I want Claude to have in mind
  • The precise context of what I'm about to do
  • The concrete exercise or project I'm tackling
  • The previous discovery conversation, so Claude can see what was validated, what the basics are, and where I'm starting from

It's a lot of context engineering work every time. But that's what allows Claude to actually help me instead of giving me generic answers. AI without context = a better search engine. AI with my context = a counterpart that understands what I'm trying to do.

That's actually where I'm starting to focus more and more: context engineering rather than prompts. A good prompt without context is worth less than an average prompt with the right context. Moving from prompt to context engineering with AI is where I'm putting my energy.

3. Implementation and Real-Time Notes

I move on to the project. I try, I implement, I break things. I take notes as I go, not at the end.

This point is important because I made the opposite mistake at first: waiting until the end of the project to document everything. Result: I was already skipping things, I couldn't remember my first struggles, I couldn't find why I'd made this or that choice at some point. Now I capture things on the fly — even messy, even incomplete. Sorting comes later.

4. Formatting with My Templates (And a Healthy Dose of Iteration)

Once an important step is reached, I decide whether what I learned deserves to go into the repo. If it's a reusable concept, a new technology, a recurring or critical problem, in it goes.

That's where Claude helps me format. I give it my raw notes + the relevant template, and I ask it to structure. The absolute rule: concepts stay in my words. If I let Claude paraphrase on my behalf, I lose the trace of what I understood, and the note doesn't speak to me on re-reading.

I have several templates: one for concepts, one for projects, one for cheatsheets, one for troubleshooting. They're different because the uses are different, but I keep the overall form consistent. Between two concept notes, the template is always the same: same sections in the same order. Some sections are optional depending on the case, but the base stays identical. That familiarity means I find the info in the same place in every note, without having to think about the structure.

My current template is the result of many iterations. At the start it was too long, too complex. Writing took two hours, re-reading two months later gave me nothing more than a short note would have. I simplified, then simplified again. Now a concept note takes me 30-40 min to finalize, and when I re-read it I find my reasoning, not an AI-formatted summary.
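To make the consistency idea concrete, here's a rough sketch of what a concept template in that spirit could look like. The section names are illustrative, not lifted from my actual meta/ folder:

```markdown
---
tags: []
status: status/learning
created:
---

# <Concept>

## In one sentence
<my own words, never an AI paraphrase>

## Analogy
<optional: an image that makes the concept click>

## How it works
<the core mechanics, validated against the official docs>

## Links
[[related-concept]] · [[project-where-used]]
```

Every concept note reuses the same skeleton, so the structure itself never gets in the way of the content.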

5. Extracting Troubleshooting Notes

If I really struggled on a problem during implementation, I make a note in troubleshooting/. Not just the solution: the exact error message, the hypotheses tested, what didn't work and why, the final solution, and how to avoid it.

I've got three of those in my repo so far: two on Docker and one on Grafana. That's not many, but the goal isn't volume, nor even saving time the next time the problem comes up. These notes cover problems critical enough to deserve an explicit trace, even if I might not run into them again for months.
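In practice, a troubleshooting note in that spirit could be laid out like this. The title is invented; the fields mirror the ones listed above:

```markdown
# Container exits immediately after docker run

## Exact error message
<paste the full message verbatim, so a keyword search finds it later>

## Hypotheses tested
1. <what I tried first, and why>
2. <what didn't work, and why it didn't>

## Final solution
<the fix that actually worked>

## How to avoid it
<the check or habit that prevents a repeat>
```

Keeping the failed hypotheses is the point: the solution alone tells me what to do, the dead ends tell me how I reasoned.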

The Guidelines: The Real Leverage of the System

The system doesn't rest on Obsidian. It rests on the guidelines I give Claude about context: what it needs to know, how it should work with me, what it shouldn't do on my behalf.

I have a global CLAUDE.md, loaded automatically on every Claude Code conversation. Inside: my tech context, my priority stack, my pedagogy (explain before implementing, challenge me, don't do it for me), my response style, my source hierarchy, and my safety rules.

At the very least, the source hierarchy is what it absolutely must respect: official docs first, recognized tech blogs next, the rest only for in-depth technical discussions. No approximations, no rephrasings of rephrasings. That's the most important rule in the file; without it, everything else collapses.

On pedagogy, I ask it to explain before coding, to challenge what I think I've understood, to never hand me a ready answer when I can search for it myself. On style, French with technical terms in English, concrete, no fluff. On control, Claude never touches Git on its own, never runs a destructive command without explicit validation: it shows me, I approve, it executes.
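Sketched as a file, guidelines in that spirit might look like this. It's a reconstruction from the rules described above, not my actual CLAUDE.md:

```markdown
# CLAUDE.md (global, loaded in every Claude Code conversation)

## Sources
1. Official documentation = absolute source of truth
2. Recognized tech blogs (AWS, CNCF, HashiCorp)
3. Stack Overflow, only for in-depth technical discussions
Never: approximations, "I read somewhere that".

## Pedagogy
- Explain before implementing
- Challenge what I think I've understood
- Never hand me a ready answer I can search for myself

## Style
- French, technical terms in English, concrete, no fluff

## Control
- Never touch Git on your own
- Destructive commands: show me, wait for approval, then execute
```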

And inside the repo, my own rule is simple:

My words must stay mine.

When I re-read a note, I want to find my voice, my words, my way of reasoning — not a clean AI-generated rephrasing. The day I no longer recognize myself in what I re-read, it's Claude that learned in my place, not me.

My Real Usage of AI

In practice, I'm at about 60% with AI and 40% on my own. And the usage is clear: it's to understand, to make what it gives me mine. I discuss with AI a lot. I ask it to explain, to challenge my understanding, to point out what I'm missing.

My current view: in some time, AI will already be coding better than us on many tasks. It already does for boilerplate (the repetitive code we rewrite on every new project with no real value: imports, config, base structure), simple refactors, generating config files. What it won't have for a good while is the full context of a project in a real company — the 45 constraints to take into account, the architectural decisions tied to business choices, the history, the hidden dependencies between teams.

So what I absolutely have to become solid on is understanding what I'm doing and why. The exact syntax of a command, the list of flags of a tool, the precise config of a file — I don't focus on that, it'll come with time and repetition. What matters now is the reasoning behind it.

The Honest Assessment

What really works: being able to dump notes in a rush, then knowing where to find them. That's it. I hit Ctrl+O, a keyword, I have my note. I remember what I'd understood, in my words, not in a generic rephrasing.

What's pretty but not essential: the graph view. I open it from time to time. It's a bonus — satisfying to see the evolution of my knowledge take visual shape. The graph reminds me that progress is happening, but it's not what makes me retain anything.

Mastery, for me, is being at ease with a subject, knowing the key points, and being able to make a piece of code or architecture you didn't write your own and do something with it. That's actually why my notes have a status: status/learning, status/practiced, status/mastered. Not because "mastered" means finished — tech moves constantly, there will always be new things, evolutions, unseen cases. Mastery in the strict sense is impossible. But a solid base, that stays. And the status tells me where I'm at on it at a given moment.

What I don't do: the spaced reviews at D+7 and D+30. They're in my guidelines, I wrote them myself, and I don't follow them. I prefer reactivating through practice: if I come back to Terraform in three months, I re-read my Terraform note at that point. More natural than forcing an artificial rhythm.

Evolution: Toward Context Engineering

My current vault is great for knowledge: it's where I capitalize concepts, patterns, lessons learned, per technology. But it has a limit — it knows what Terraform is, it doesn't know how Terraform fits with the rest of the stack I'm working on at a given moment. The project view is missing.

My next step is a second repo, with a different purpose: having a global, interconnected context on a project or a subject. Not generic knowledge, applied context.

At Alan for example, that's exactly what would be super useful to me. The stack is huge, the components many, the interactions everywhere. If the AI has a global, interconnected context of the project — the services, the conventions, the links between the building blocks, what depends on what — it can understand and map what I'm working on much better than with an isolated question. No need to re-explain everything on every conversation. The repo would be private, obviously, because it would touch internal context — but I'd still extract the reusable patterns back into my public vault in a generic form.

In a personal context, same logic. My Tech Radar, for example: it's a project separate from the portfolio, with its own repo, but both touch the same site, share the same stack, and overlap at deployment. Having a shared context between the two would let the AI know what depends on what, without me having to re-explain the relationship in every conversation.

That's where I'm trying to focus now: building solid context engineering, rather than polishing isolated prompts. A good prompt without context hits its ceiling fast. Good context makes even an average prompt useful.

Takeaway

Obsidian isn't the magic tool. Neither is Claude (or is it?). What works is the combination: a place to put things down in my words, an AI that helps me structure without distorting, and clear guidelines so that AI works with me and not in my place.

Disclaimer: all this is my method. Not a universal method. I built it by iterating, simplifying, throwing away what didn't serve. It works for me today and it'll keep shifting.

If you want to take inspiration from it, take what resonates, leave the rest, and test it on your own projects. It's the only way to know what fits you.