Decision Making - My Second Brain
Building solid, functional software and high-performance teams requires a treasure trove of information and insights that enable informed decisions to be made promptly.
Considering the many options available to achieve these outcomes in the current engineering landscape can overwhelm the people who need to make such decisions. Frequently, it results in the decision-making process taking longer than expected or, worse, no decision being made at all.
Maintaining a mindset of continuous learning, minimising blind spots ('unknown unknowns'), and keeping pace with an ever-changing landscape, embracing both the brand new and the evolving old, all demand more information than any one person can retain.
If I asked you what you ate for dinner 90 days ago, could you recall it from memory? Did you eat dinner 90 days ago?
This question highlights a common failure of human memory: as more time passes, low-level details become cloudier or are forgotten entirely.
A second brain approach can help us fill in the missing parts. If we had a calendar event with a restaurant reservation, we could answer the question: 'Oh, you ate at a restaurant that evening.' Perhaps you even took a photo because you were particularly proud of that meal.
However, an important thing to consider is whether you need an answer to this question at all. If you're watching your diet closely, you probably do.
We should only keep hold of information that is beneficial for our needs. We don't need to hold onto unimportant details.
Effective Memory Usage
I hold on to many high-level concepts with low cardinality to form part of my quick-action toolkit, while only keeping a much smaller amount of very low-level details with very high cardinality to support my maker-mode execution.
These high-level concepts empower decision-making because they form the 'I know it exists', the 'why', the 'when to apply it', and the most common problems that will catch you out. However, if I need to move into maker mode, it takes time to refresh the low-level details.
This high-level view is frequently more than enough detail in a leadership position, as you will delegate the low-level details and execution.
Let's consider an example where we can apply these high-level concepts.
Requirement: We must allow external traffic to invoke internal microservice APIs.
Pulling from our high-level toolkit, we can paint a picture of options and things we need to consider.
- What type of APIs do our microservices currently expose: **REST**, **gRPC** or **GraphQL**?
- Do we need a translation layer?
- What are the authentication requirements: **OAuth**, **API keys** or **OpenID**?
- Should we use a plain **API Gateway**, or would a **Backend For Frontend (BFF)** gateway be more suitable?
- Should we use a **SaaS** offering or self-host?
- If so, which one: **AWS API Gateway** or **Kong API Gateway**?
- How will it be documented? **OpenAPI**?
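To make the shape of these decisions concrete, here is a minimal Python sketch of the translation-layer idea from the list above: an external-facing gateway that checks an API key and routes external REST paths to internal services. The route table, handlers and key store are hypothetical stand-ins, not a real gateway implementation.

```python
# Hypothetical sketch of a gateway translation layer.
# VALID_API_KEYS and ROUTES are illustrative; real systems would use a
# secret store and actual microservice clients (REST/gRPC).

VALID_API_KEYS = {"demo-key-123"}

# Map external paths to internal handlers (stand-ins for service calls).
ROUTES = {
    "/orders": lambda: {"orders": []},    # would call the orders service
    "/pricing": lambda: {"price": 9.99},  # would call the pricing service
}

def handle_request(path: str, api_key: str) -> tuple[int, dict]:
    """Return an (HTTP status, body) pair for an external request."""
    if api_key not in VALID_API_KEYS:
        return 401, {"error": "invalid API key"}
    handler = ROUTES.get(path)
    if handler is None:
        return 404, {"error": "unknown route"}
    return 200, handler()
```

The high-level concept (auth at the edge, then translate and forward) is what's worth retaining; the specific framework you'd build this with is a low-level detail you can look up when needed.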
The items marked in bold form part of the high-level concept pool. For example, with REST, we aren't concerned with how to implement a 4xx response to align with the REST standard. Instead, whether we're using REST may impact decisions such as how we produce the documentation.
Low-level details focus on the short term to execute the current task(s) at hand; these are the specific details of the problem.
These details are often required in the short term to deliver a given feature, e.g. the Terraform definition to create some new infrastructure.
These low-level details form the core foundations of your high-level concepts. Early in your career, you will focus far more on these very low-level details.
Let's consider an example where we can apply these low-level details.
Requirement: We must send a welcome email when new users register.
When applying low-level details, think 'boots on the ground': part of the delivery, with much shorter timelines.
- What is the mail server API contract?
- What code should I write, and what tests should I write?
- What are the failure modes?
- What backoff policy will I use, and how will I implement it?
- What circuit breaker thresholds will I use?
- I need to populate X metadata fields.
- What exception should be thrown if the email bounces?
- What columns will I add to the database?
- What SQL am I writing? Is this valid SQL? Is it performant?
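Several of the questions above (backoff policy, failure modes, circuit breaker thresholds) can be sketched in a few lines. The following Python example is a hedged illustration only: `send_email`, the thresholds and the delays are hypothetical stand-ins, not a real mail-server API.

```python
import time

MAX_ATTEMPTS = 3
BASE_DELAY_SECONDS = 0.5  # doubles each retry: 0.5s, 1.0s, ...
BREAKER_THRESHOLD = 5     # trip after this many consecutive failed sends

consecutive_failures = 0

def send_with_backoff(send_email, recipient, sleep=time.sleep) -> bool:
    """Try to send a welcome email with exponential backoff.

    Returns True on success. A simple failure counter acts as a
    circuit breaker: once tripped, we fail fast without calling the
    mail server. sleep is injectable so tests don't actually wait.
    """
    global consecutive_failures
    if consecutive_failures >= BREAKER_THRESHOLD:
        return False  # circuit open
    for attempt in range(MAX_ATTEMPTS):
        try:
            send_email(recipient)
            consecutive_failures = 0  # success closes the circuit
            return True
        except Exception:
            if attempt < MAX_ATTEMPTS - 1:
                sleep(BASE_DELAY_SECONDS * 2 ** attempt)
    consecutive_failures += 1
    return False
```

In a real delivery you would likely reach for a library or platform feature rather than hand-rolling this, but being able to sketch the mechanism is exactly the kind of low-level detail that later compresses into a high-level concept.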
What Is The Second Brain?
The role of our second brain is to expand the capacity of information we wish to retain. The most common candidates are areas we exercise only occasionally, as these risk becoming lost knowledge over time.
For example, let's say you've spent six months building a frontend web application, but your next six-month assignment will focus on building a backend pricing engine. As you no longer exercise that frontend knowledge, it will start slipping to the back of your mind.
Using a second brain approach, we can save some of our learnings in a long-term persisted format (e.g., a markdown file or a script).
Two types of information are worth retaining:
The first is knowledge about the high-level concepts: What architecture did you use? What problems did you encounter, and how did you overcome them? Why did you use particular tooling? What would you do differently next time? Doing this builds the treasure trove of high-level information you can call back on.
The second is low-level specifics. Focus on reusable boilerplate, particularly challenging areas you encountered, or utility scripts that smoothed the process.
Avoid anything easily found with a quick Google search or your favourite AI tooling.
I stress the need to refrain from filling your treasure trove with your employer's specific information. That information should already exist within code repository READMEs, wikis, architecture decision records (ADRs), and so on, so there is no need to duplicate it in your second brain.
Second Brain Setup
Now that we have looked at the types of information worth keeping hold of, let's explore the tooling I use to bring this to life.
Overview of setup
I use Obsidian, which provides a lovely UI to interact with all the data I have collected.
Obsidian renders markdown files in an easy-to-read format with good search discoverability. The details stored within these documents provide enough context to point to further reading, rather than duplicating well-documented information that already exists in external sources. This approach reduces the likelihood of working with outdated information.
I use IntelliJ as my interface for code-related snippets, primarily because IntelliJ is my default editor for all things code. I like to store most snippets in their native file type, e.g. a bash script in a .sh file.
In doing so, I get out-of-the-box IntelliSense and the ability to run the script; however, this is a personal preference, and you could just as easily store them in the markdown file. These snippets are mostly utility scripts formed from the low-level details mentioned earlier.
```shell
git commit --allow-empty -m "Empty Commit"
```
Finally, I use GitHub as my Git provider. This gives an easy way to sync this information across any number of devices, a record of version history, and backups of the data, and most corporate networks don't block access to GitHub.
The above setup comes free of charge.
We have examined how the second-brain mindset can help retain knowledge in a way that overcomes some of the shortcomings of our minds. With the drive to continually learn and improve, we want to ensure we're scaling out our brains as we progress through our engineering careers.
While I have focused on this from a software engineering perspective, these approaches are not limited to software engineering.
What approaches do you take to retain information for the long term? What tooling, if any, are you using? Do let us know!
Thank you for reading.