Opinionated guide to writing the best code humanly possible

Achieving zero bugs and optimal workflow on a real-life app

Diego Oliveira Sánchez
9 min read · Dec 27, 2021
Writing code should be fun!

My name is Diego, and I am the co-founder of NutriAdmin. We cater to nutritionists, and we also offer software for personal trainers and coaches.

I have been writing code since 2009. I have written and I maintain tens of thousands of lines of code (with no duplication), mainly for the two web-apps above.

Today I want to share how I write code nowadays, using examples from my latest product Mealplana, to achieve the following:

  • Zero logical bugs*
  • Zero code duplication
  • 100% unit test code coverage
  • Extremely simple files
  • Extreme ease of feature expansion
  • Pleasure to write code every day for an ever-growing project

*By zero bugs I mean that everything works as expected. There may occasionally be a display glitch, or an omission, but even this is rare.

I’m not claiming my coding style is best, but it works extremely well for me. Every year I have more code to maintain as projects grow, yet the work gets simpler because experience allows for more efficient architecture.

So how is this guide different? I’m aware there are plenty of coding style recommendations out there. Many of them are great, but I tend to find they are written by people who teach coding for a living.

My case is different: I run my own business, and I need to balance code quality with delivering on time, sales, marketing, and so on. I’m both the business side and the engineering side at my company.

In other words, this is a guide to how I write the best code I can whilst being pragmatic. There are no ideas here that sound fancy in theory but are impractical. Everything shown here I do myself on a daily basis.

Hopefully this is of use to other developers trying to improve the quality of their codebase. I also hope my future self will read this 10 years from now and laugh at how inexperienced I was. There is always room to learn!

I’m also happy to hear recommendations and tips from other developers. There are plenty of people out there that know far more than me. Coding is so exciting in part thanks to the communities it fosters.

Let’s get started!

Style and Tools

My style is influenced by Robert C. Martin (Clean Code), and functional programming, as well as my own experience over the years.

I use TypeScript and React in my web app. My IDE of choice is WebStorm, my logic is written by composing Ramda functions, and I run tests using a clever tool called Wallaby.

My app is a monorepo managed with NX. Webpack is my bundler of choice.

Regarding equipment, after years of lower-spec laptops, I’ve recently invested in a top-of-the-line MacBook Pro with an M1 Max chip and 64GB of RAM. Tests that used to take 15 minutes to run now finish in under 40 seconds!

If coding is your life, and you are serious about it, my first recommendation would be to invest in the best tools for the job that you can afford and reasonably implement into your project.

Typings & Schema

It all starts with data and types. We use AWS Amplify for our backend and to define GraphQL models for the different entities stored in our database.

Any model we define has, at most, one level of depth. For example:

type User {
  id: ID!
  companies: [Company]
}

type Company {
  id: ID!
  address: [Address]
}

type Address {
  id: ID!
  postcode: String
}

Each model tends to have at most around 10 properties. If it gets bigger than that, it typically makes sense to split it into two more streamlined models.

I find that this way of defining models is superior to having multi-depth, gigantic, nested objects for a variety of reasons:

  • Easier to re-use models, i.e. something like an Address will be used in multiple places
  • Ability to load just the data that is relevant in each situation
  • More efficient and specific queries
  • Simpler code
  • etc

Furthermore, Amplify provides a codegen tool that generates TypeScript types for the models and for CRUD operations. Types are thus defined once in a single schema, then used consistently across the front-end and back-end alike.

The backend and front-end models are always in sync. If the backend changes a model definition, the TypeScript compiler will point out where in the code an obsolete property is being accessed.
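
As a rough illustration (the real generated output is more involved than this), the generated types look something along these lines, so a stale property access turns into a compile error:

// Hypothetical shape of the codegen output for the Address model above
export type Address = {
  id: string;
  postcode?: string | null;
};

// Front-end code leans on the generated type directly...
export const formatPostcode = (address: Address): string =>
  (address.postcode ?? 'unknown').toUpperCase();

// ...so if the schema later renames or removes `postcode`, regenerating the
// types makes this function fail to compile, pointing at the stale access.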

Modularisation

Nx allows me to structure the project as a multitude of specialised libraries, which act similarly to private npm modules, but without the hassle of package management and publishing.

Every module is specialised in a particular type of functionality, isolated and self-contained. There is a strict dependency tree (no circular dependencies allowed) and enforced boundaries.

The majority of the new code I write starts as a new lib, which means it is almost like starting a new project from scratch. Then, any required dependencies are imported as if they were npm modules.

For example, I may want to create a module for managing food (we provide nutrition software, after all). This module may depend on the following (a rough sketch is shown after the list):

  • forms module (to enter data)
  • nutrition module (to perform nutritional analysis)
  • units module (to display quantities in grams, ounces, etc)
  • settings module (to show different options to users depending on their settings)
  • etc
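
As a sketch of what that composition looks like (the module and function names below are illustrative, not the real Mealplana APIs), the new food lib simply imports whatever it needs from the other libs:

// libs/food/src/lib/FoodEditor.tsx (path and names are illustrative)
import React from 'react';
import { TextField, NumberField } from '@mealplana/forms';
import { analyseNutrition } from '@mealplana/nutrition';
import { toDisplayUnit } from '@mealplana/units';
import { useUserSettings } from '@mealplana/settings';

type Food = { name: string; grams: number };

export const FoodEditor = ({ food }: { food: Food }) => {
  // Settings decide how quantities are displayed; nutrition does the analysis.
  const { preferredUnit } = useUserSettings();
  const nutrition = analyseNutrition(food);

  return (
    <>
      <TextField label="Name" value={food.name} />
      <NumberField
        label={`Quantity (${preferredUnit})`}
        value={toDisplayUnit(food.grams, preferredUnit)}
      />
      <span>{nutrition.calories} kcal</span>
    </>
  );
};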

Crucially, I can start working in my new module without having to understand or change anything about my dependencies. This approach makes the whole project extremely robust and easy to work with.

NX module structure

File structure

Files are organised hierarchically by functionality, not by “type” (e.g. I don’t group HTML files in one place, CSS files in another, etc.). For instance, the Settings folder contains:

  • Billing Settings
  • Customisation Settings
  • Profile Settings
  • Security Settings
  • Team Settings

Within each folder above, there are further subdivisions. For example, Billing Settings contains:

  • Billing Details
  • Cancel Subscription
  • Change Card
  • Change Plan

Then, within Change Plan there are lower-level implementation details for this component. Sub-levels can go down an arbitrary number of steps.
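
Sketching just a slice of that tree (the exact folder and file names inside Change Plan are mine, for illustration only):

Settings/
  BillingSettings/
    BillingDetails/
    CancelSubscription/
    ChangeCard/
    ChangePlan/
      ChangePlan.tsx
      ChangePlan.spec.tsx
      PlanOption.tsx
      PlanOption.spec.tsx
  CustomisationSettings/
  ProfileSettings/
  SecuritySettings/
  TeamSettings/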

Looking at the folder structure clearly shows intent. You can guess what each part of the file tree does without much effort, and every file/folder is roughly in the location I would expect.

Moreover, the file structure mirrors the user interface. “Outer” components, such as the overall container are at the topmost level, whilst small “inner” components such as a specific button are deep within the tree.

Any component that is re-used in more than one place will be part of a lib. For example, user interface components like ContentBox, Button, or Alert are implemented (and tested) once inside the ui lib and then imported:

import { Button, ContentBox } from '@mealplana/ui';

export const MyComponent = () => (
  <ContentBox>
    <Button onClick={() => console.log('clicked')}>
      Click me
    </Button>
  </ContentBox>
);

My build process will automatically detect copy-pasted code and fail the build. Any duplication should be abstracted into a re-usable component.

Unit tests

Every component, hook, and file will be tested by an associated unit test file that is stored in the same directory with the same name and a spec extension. For example, BreadcrumbUtil.ts and BreadcrumbUtil.spec.ts will form a pair.

Unit tests only evaluate the code within the immediately adjacent file they are testing. Any external dependency is mocked.

For instance, there are dozens of files in Mealplana that use forms. Forms are imported from the @mealplana/forms module (an NX lib). Every single time forms are imported into another file, they are mocked.

This way, I avoid testing the same code multiple times. Code duplication also applies to tests: I shouldn’t have to test how my forms validate data every single time I use them! It’s redundant and inefficient.
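
A minimal sketch of such a spec file, reusing the illustrative FoodEditor from the modularisation section (this assumes a Jest-style runner and @testing-library/react, which the article does not specify):

// FoodEditor.spec.tsx, sitting next to FoodEditor.tsx in the same directory.
import React from 'react';
import { render, screen } from '@testing-library/react';
import { FoodEditor } from './FoodEditor';

// Every external lib is replaced by a lightweight stub: the real forms,
// nutrition, units, and settings code is already covered by its own specs.
jest.mock('@mealplana/forms', () => ({
  TextField: () => null,
  NumberField: () => null,
}));
jest.mock('@mealplana/nutrition', () => ({
  analyseNutrition: () => ({ calories: 52 }),
}));
jest.mock('@mealplana/units', () => ({
  toDisplayUnit: (grams: number) => grams,
}));
jest.mock('@mealplana/settings', () => ({
  useUserSettings: () => ({ preferredUnit: 'g' }),
}));

it('shows the analysed calories for the food being edited', () => {
  render(<FoodEditor food={{ name: 'Apple', grams: 100 }} />);
  expect(screen.getByText('52 kcal')).toBeTruthy();
});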

From a productivity and focus perspective, I enjoy splitting my editor window in half, with the test file on the left and the code on the right, so that I get to write them both together, side by side.

I use Wallaby to automatically run affected tests on any change. This tool provides near-instant feedback showing green/amber/red boxes next to each line of code:

  • Green: line is covered by passing tests
  • Amber: line is partially covered (i.e. some branches of the code were not executed)
  • Red: line is not covered

This workflow eliminates context-switching, as at any given time I will be working on a hyper-focused file with less than 100 lines of code, that is fully tested.

Once I finish working on a file pair, with all tests passing, I will commit them to the repo and forget about them. They constitute a self-contained unit, so I don’t need to keep track of them in my brain as I move to a new file.

Finally, every file has a clear input and output, strongly typed, so when I come back to work on a feature I only need to read a small fraction of code — trusting the rest of the system works as expected.

There is 100% code coverage in all the code I write, with a couple of tiny exceptions for bits that use third-party code and are too hard to test (e.g. uploading a file, printing, etc.).

Summing up: a typical day of coding

How does all of the above tie together, and what does my coding experience look like on a daily basis?

Mealplana provides meal planning software for professionals. Let’s imagine we want to implement a new feature: a template to download meal plans as PDF files with a nice, professional look.

Since every component in the software is isolated, I don’t need to remember much about how meal plans work in the software. I will open a file called MealPlanDownload.tsx that receives a MealPlan as an input and start there.

Furthermore, I don’t need to know anything about how meal plans are created, edited, or anything else. I assume that part of the software is working as expected and fully tested, and I can focus just on the downloading part.

Once I open MealPlanDownload.tsx I find that this file is subdivided further into:

  • A function that converts data from a meal plan into content the PDF exporter can consume
  • The download functionality that takes the PDF file as an input and initiates downloading

In this particular case, I can, again, ignore the downloading part, since I only care about format right now. Downloading will continue to work as long as the formatter returns an object that matches the specified interface.
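
The internals might look roughly like the sketch below; the file name comes from the example above, but the interface and helper names are mine rather than the real Mealplana code:

// MealPlanDownload.tsx - the file name is from the article; everything
// below is an illustrative sketch, not the real implementation.
import React from 'react';
import { downloadPdf } from './PdfDownload'; // hypothetical existing helper

// Minimal stand-in types; the real generated MealPlan model is richer.
type MealPlan = { name: string; days: { label: string; meals: string[] }[] };
export type PdfContent = { title: string; sections: { heading: string; items: string[] }[] };

// 1. Pure formatter: converts a meal plan into content the PDF exporter
//    can consume. The new template only needs changes here.
export const toPdfContent = (mealPlan: MealPlan): PdfContent => ({
  title: mealPlan.name,
  sections: mealPlan.days.map((day) => ({ heading: day.label, items: day.meals })),
});

// 2. Thin wrapper: hands the formatted content to the existing download
//    behaviour, which keeps working as long as the PdfContent shape holds.
export const MealPlanDownload = ({ mealPlan }: { mealPlan: MealPlan }) => (
  <button onClick={() => downloadPdf(toPdfContent(mealPlan))}>Download PDF</button>
);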

This process can be repeated multiple times, digging through the small files in the project until I arrive at the exact place that needs to be edited.

Once I am there, I can relax and enjoy writing a new file of code, without having to worry much about a system that keeps growing and growing in complexity every day, yet is designed to scale well.

In general, I will tend to add functionality, rarely changing the way something works unless there is a good reason for it (i.e. fixing a bug).

Extending code (as opposed to amending code) allows me to spend the majority of my time thinking only of new code I’m writing, without having to remember how bits that were written years ago work.

Compare this with monoliths where, in order to make a change, one must understand thousands of lines of code, and where each change triggers a cascade of broken dependencies.

A legacy, monolithic codebase is, regrettably, a common case in many companies. Developer productivity and morale always plummet in those projects.

Conclusion

I hope you have enjoyed reading this article; maybe you have even gleaned some useful ideas for your own project.

Coding is a skill that is easy to learn but hard to master. Every year I learn new things, and my style is refined incrementally.

When I look back, and I see code I wrote in years past, I am dismayed at just how utterly terrible it was! There is always so much to learn, and I am aware that my code will continue to (hopefully) improve into the future.

I feel extremely grateful that I get to spend my working days with this creative pursuit. I truly enjoy programming, solving technical challenges, and providing value to customers.

I hope my fellow programmers enjoy the journey too. I’m sure many people share similar sentiments.

Please feel free to share your thoughts and whether you agree or disagree with the different ideas discussed. I’d be happy to see alternative approaches.


Written by Diego Oliveira Sánchez

Co-founder of nutriadmin.com, a practice management software and meal planning solution for nutritionists, dietitians, personal trainers, and coaches.
