A top tip for new Software Engineers

Every time you start a new job, there is a newfound energy that comes with the new territory. You feel rejuvenated and ready to learn. This is a powerful force and, when used correctly, it can dramatically speed up your learning in a new environment as a software engineer. Typically, you enter a new working environment with little bias about what you prefer to work on. That bias slowly accumulates over time and eventually hardens into a preference.

I could write this article from a couple of different points of view. Here I am focusing purely on maximizing domain knowledge as a new software engineer and picking up many different skills along the way. This is not for everyone, but I have noticed that most engineers struggle to understand their domain because of a lack of communication channels, a lack of engagement, or simply the sheer volume of information that overwhelms them.

Success depends on what you are willing to put in!

The noise at the office.

Before the world had to deal with Covid-19, I spent most of my time in an open office environment with loads of noise around me. This noise was mainly people discussing work, yes, and the occasional social discussion; well, let's be honest, a whole lot more of that. We aren't part of these work desk discussions, so why would they be relevant to us?

The accepted solution is to cancel this out by putting on our earphones with some of our favorite music or podcasts and focusing on work. Nothing wrong with that. Software engineers need solid focus time.

However, whilst it could be perceived as noise, I would argue differently: there are copious amounts of gold nuggets in the audio moving around the office. In previous jobs, I would remove my earphones, listen to the office chats, and write down some of the terms or jargon I heard to explore later. If you struggle to work and listen at the same time, take 15 minutes when the chatter is high or when specific people are conversing and write down what you hear.

This presents a good opportunity to approach these colleagues and ask what a term means in the context of the business. This learning did not fall into your lap accidentally; you were looking for it and probably learned faster than you otherwise would have. Active learning! And, bonus: you showed interest and socialized with a colleague.

Doing this continuously over time will increase your knowledge, and the positive by-product is that it gives you the ability to become part of the conversation if you decide to.

I’m working from home now!

“Ok, but with Covid-19 I started working from home. What do I do now?”.

Chat applications are the new office environment, and most business-related conversations that used to happen at the office now move into some channel or direct message on Teams, Zoom, Slack, or Google Meet.

There is an obscene amount of chatter in these channels, and the noise (information) can be even greater than in an office. It is also much easier to ignore; no earphones are required. If you are new to the domain, all the chatter in the chat application contains the gold nuggets you need. It is tough to follow all the channels, and notifications have become poison to our attention span.

Schedule 15 minutes in the morning, at midday, and in the evening to work through these messages and write down terms or concepts you aren't familiar with. Approach the person in a direct chat and ASK, or ask directly in the next general meeting or video call. Asking a question in a large channel can trigger the "bystander effect", where everyone assumes someone else will answer, potentially leaving you with no response. This is a real phenomenon. Don't take it personally.

Remember, you will eventually learn the domain, but it is in YOUR control how fast you learn it. Learning one domain gives you the experience to more easily navigate the next.

Conclusion

Fostering this kind of awareness early in your career will set you up for success. When you are new to a domain, you are often given overwhelming knowledge transfers in hours of meetings and/or documentation that don't provide a good starting point; they dive straight into context while skipping how you get to what the document is explaining.

Using this strategy allows you to put the pieces of the puzzle together until you have the full picture, and it will greatly assist in adding context to those KTs (knowledge transfers) and documentation.

Don’t wait for people to tell you what to do or feed you the information. Be proactive. Awareness, observation, and simply being a good listener promote active learning. This sets you apart and shows initiative, which are qualities managers want to see!


A brief look into the decentralized web 3.0!

My first encounter with the term Web 3.0 was years back, when Web 2.0 had only just started. As humans, we get bored quickly and don’t always appreciate what’s happening in front of us, so we try to predict the next big thing!

All jokes aside. Tim Berners-Lee, the father of the World Wide Web, coined the term “Semantic Web” in a scientific paper back in 2001. A semantic web seemed promising. The goal was to have pages on the World Wide Web be machine-readable!

Machines would be able to read and share data with each other in a standardized manner, in much the same way humans do. Siri, Google Assistant, and Alexa would have had an easier job finding information. Imagine asking them for information and there being less snarkiness from us humans when they come back with the wrong answers!

Truth be told, I would have preferred Web 3.0 this way. It would have been a massive yet worthy endeavor, one that requires a lot of really smart people to work together.

Sadly, it is still a pipe dream somewhere on a scientific paper.

Now, imagine my surprise when I first read that Web 3.0 refers to the “Decentralized Web”. Cryptocurrency came and turned what we knew upside down.

Because the decentralized web is more recent and the semantic web has had its chance, I will acknowledge that the decentralized web can proudly wear the Web 3.0 badge…for now.

The problem was Web 2.0

Web 2.0 is amazing! The web made strides in improving the user experience, which enabled data collection from users over time, and as computing capacity became cheaper it was easier to scale: more users signed up and “gave away” their personal information in exchange for these free services.

The companies became data collection agents. Think Facebook, Google, and Twitter…these are centralized applications, meaning all our data flows to a central location they have full control over. They are free services, though, where we spend much of our free time! Maybe too much? As we know by now, free is never free. It comes at a cost. Our precious data gets collected and sold to interested buyers. How else would these companies survive?

The other problem that raised its ugly head is censorship. Don’t get me wrong, I have no time for racism, harassment, or bullying of any kind. I despise it! These companies took care of those problems, not perfectly but nobly. At the same time, these platforms gave people a voice they never had before.

These platforms, or companies, have the power to remove any posts they do not agree with. There is a bias built into the system that makes freedom of speech hard to practice, because the pre-configured algorithms, tuned to the company’s bias, may not agree with you.

This is a level of control that should concern us. On these platforms, we are not masters of our own data, and we don’t have the freedom to speak our minds about whatever we feel strongly about. Or you can try, and hopefully the platform “agrees” with you.

Nevertheless, Web 2.0 is still awesome! It brought much more good into the world.

How will Web 3.0 be different?

There are two problems I want to focus on to explain the existence of the decentralized web. The first, which I identified above, is that users don’t have full control over their data. The second is how I can prove that I own a specific digital asset like a photo or a video. Digital ownership!

Think about the inverse of these problems. I own my data and I can find and prove that it is mine. 

A decentralized what?

Instead of having centralized servers in one location, there are nodes – sets of servers in a location – in multiple locations, ideally all over the world. Each node has equal privileges. This prevents any one node from becoming a single point of failure and promotes fairness. In other words, if one node’s servers crash, the data is still available elsewhere, and when that node comes back up the data is easily restored.

The Blockchain

A distributed model like the one described above can prove that an event took place: a transaction, a conversation, or an opinion (think Twitter and Facebook). With the inception of Bitcoin, the world’s first successful digital currency (and, importantly, a digital asset), the idea was to make every transaction on a peer-to-peer, decentralized network visible on a public shared ledger.

There are multiple nodes (servers) on this network, and what makes blockchain powerful is the algorithm it provides for all nodes to reach consensus on the authenticity (state) of a transaction before it is posted. As of today, there are more than 1,000 blockchains across four types of blockchain networks.
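To make the ledger idea concrete, here is a minimal, illustrative sketch in Python of a hash-linked chain of blocks: each block stores the hash of the previous one, so any tampering with history is immediately detectable. It deliberately leaves out networking, consensus, and mining, and the transaction fields are made up.

```python
import hashlib
import json
import time

def block_hash(block: dict) -> str:
    """Hash the block's contents so any change to it is detectable."""
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

class Ledger:
    """A toy append-only ledger: each block stores the hash of the previous one."""
    def __init__(self):
        genesis = {"index": 0, "timestamp": time.time(), "tx": [], "prev_hash": "0" * 64}
        self.chain = [genesis]

    def add_block(self, transactions: list) -> dict:
        block = {
            "index": len(self.chain),
            "timestamp": time.time(),
            "tx": transactions,
            "prev_hash": block_hash(self.chain[-1]),
        }
        self.chain.append(block)
        return block

    def is_valid(self) -> bool:
        """Every block must reference the hash of the block before it."""
        return all(
            self.chain[i]["prev_hash"] == block_hash(self.chain[i - 1])
            for i in range(1, len(self.chain))
        )

ledger = Ledger()
ledger.add_block([{"from": "alice", "to": "bob", "amount": 5}])
print(ledger.is_valid())                  # True
ledger.chain[1]["tx"][0]["amount"] = 500  # tamper with history
print(ledger.is_valid())                  # False: the chain no longer lines up
```

On a real network, every node holds a copy of this chain and they only append a block once they agree on it, which is where the consensus algorithm comes in.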

Smart Contracts

Nick Szabo, a computer scientist and lawyer, coined the term “smart contract” in the 1990s. A smart contract is a digital agreement between all parties. Once all conditions have been met, the transaction or event is permanently stored on the blockchain. The conditions are effectively the terms of the agreement: computer code that runs to automatically check everything the parties agreed to upfront.

This programmatic, automated checking removes the need for a manual intermediary; in other words, a human with expensive skills to facilitate the flow of value. Once a smart contract is deployed on the blockchain, it is very difficult to alter its conditions. The last important note is that there may be a fee involved, called gas, to be paid. This helps pay for the blockchain infrastructure and is usually modest.
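Real smart contracts are deployed on-chain and are typically written in languages like Solidity, but a rough Python model of the core idea – the terms encoded as code, with the event only recorded once every condition passes – might look like the sketch below. The escrow scenario and field names are invented for illustration.

```python
# Toy model of a smart contract: the "terms" are plain functions that must all
# return True before the agreed transfer is recorded. Purely illustrative --
# a real contract would be deployed on-chain and executed by the network.

conditions = [
    lambda deal: deal["buyer_deposited"] >= deal["price"],  # buyer funded escrow
    lambda deal: deal["goods_delivered"],                   # seller delivered
    lambda deal: deal["deadline_met"],                      # within the agreed time
]

def settle(deal: dict, ledger: list) -> bool:
    """Release funds only when every condition holds, then record the event."""
    if all(check(deal) for check in conditions):
        ledger.append({"event": "escrow_released", "deal_id": deal["id"]})
        return True
    return False

ledger = []
deal = {"id": 42, "price": 100, "buyer_deposited": 100,
        "goods_delivered": True, "deadline_met": True}
print(settle(deal, ledger))  # True -> the release is appended to the ledger
```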

The Blockchain network

A blockchain network is an infrastructure providing access to the ledger and to smart contract services. These networks are created by a consortium, which can be individuals or companies, and are governed and protected by a set of rules this group creates before the launch of the network. There are various types of blockchains, for example public, private, consortium, and permissioned. For this article, it is not important to understand the various types. We will focus on the public blockchain network, as it is the most decentralized of all.

Blockchain Protocol Layers

Blockchain technology consists of multiple layers as part of the blockchain architecture: hardware, data, network, consensus, and application. One thing I noticed while doing my research was that sources differ on the number of layers.

I want to encourage you to read up on the architecture, but here I want to shift the focus to the blockchain protocol layers. The protocol is the important set of rules governing the blockchain; it dictates how the blockchain must operate. I want to focus on two of these layers.

Proof-of-Work vs Proof-of-Stake

Layer 1 is responsible for ensuring the security of the chain by using a consensus protocol like proof-of-work (PoW) or proof-of-stake (PoS). Bitcoin uses PoW to validate blocks of transactions, while Ethereum migrated from PoW to PoS in September 2022.

The main difference is that proof-of-work consumes an exorbitant amount of energy and has become inefficient and expensive. At the time of writing, it costs more to mine a bitcoin than the current price of one!
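To see why PoW burns so much energy, here is a stripped-down Python version of the mining puzzle: keep hashing the block data with different nonces until the hash starts with enough zeros. The difficulty used here is tiny; on Bitcoin the target is astronomically harder, which is exactly where the electricity goes.

```python
import hashlib

def mine(block_data: str, difficulty: int = 4) -> tuple[int, str]:
    """Brute-force a nonce until the hash has `difficulty` leading zeros.
    Real networks demand far more leading zeros, hence the energy cost."""
    prefix = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if digest.startswith(prefix):
            return nonce, digest
        nonce += 1

nonce, digest = mine("block #1: alice -> bob 5 BTC")
print(nonce, digest)  # verification is cheap: one hash confirms the work was done
```

Note the asymmetry: finding the nonce takes enormous effort, but any node can verify it with a single hash, which is what makes the work usable as proof.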

With the computations becoming more complex and time-consuming as we move closer to the 21-million-coin cap, and with the last bitcoin expected to be mined around 2140, one can only imagine the electricity bill a large corporation will need to fork out to mine one bitcoin! Only a few individuals and organizations can afford these efforts today, which negates the decentralized focus of cryptocurrency.

Proof-of-stake requires an individual or organization to purchase a large amount of a particular cryptocurrency and stake it on the blockchain to qualify as a validator of transactions. This is more environmentally friendly, but it has its own challenges. Unfair behaviour on the chain by these validators results in penalties and the loss of staked coins.
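By contrast, here is a very rough sketch of the selection idea behind proof-of-stake: validators are chosen with a probability proportional to what they have staked, and dishonest ones can have part of their stake slashed. The validator names, stake amounts, and slashing fraction are all made up.

```python
import random

# Stake each (hypothetical) validator has locked up.
stakes = {"validator_a": 32, "validator_b": 320, "validator_c": 3200}

def pick_validator(stakes: dict) -> str:
    """Choose the next block proposer with probability proportional to stake."""
    validators = list(stakes)
    weights = [stakes[v] for v in validators]
    return random.choices(validators, weights=weights, k=1)[0]

def slash(stakes: dict, validator: str, fraction: float = 0.5) -> None:
    """Penalise dishonest behaviour by burning part of the validator's stake."""
    stakes[validator] *= (1 - fraction)

print(pick_validator(stakes))   # validator_c is picked most often -- it staked the most
slash(stakes, "validator_b")    # caught cheating: half the stake is gone
print(stakes["validator_b"])    # 160.0
```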

A huge amount of load is placed on layer 1, and as it grows it becomes slower to execute and validate transactions. To address the scalability issues of layer 1, layer 2 solutions were built on top of it. Layer 2 provides off-chain solutions that reduce the bottleneck of the first layer by handling contracts and transactions before they are written onto the blockchain. Introducing layer 2 lowered transaction fees and reduced the load on layer 1. Layer 2 is also where much of the utility lives: smart contracts are created and decentralized applications can be used.

Decentralized Autonomous Organization (DAO)

A centralized system is typically governed by a single company or organization, and in a sense it is simpler to manage because it only has to meet the needs of that organization and its interested parties.

A DAO is a group of humans incentivized through a token mechanism to agree on, create, and abide by rules. These rules are then programmed into smart contracts. Consensus on the rules is reached via a majority vote. Once these contracts execute, when all the conditions are met, the result is written into the blockchain forever. The rules are visible and can never be disputed. DAOs have no hierarchy.
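A minimal illustration of that mechanism: token holders vote on a proposal, and it passes only if a majority of the voting tokens say yes. The member names, balances, and threshold below are invented for the example.

```python
# Hypothetical token balances of DAO members.
balances = {"ana": 400, "ben": 250, "chi": 350}

def tally(votes: dict, balances: dict) -> bool:
    """Token-weighted majority vote: True if more than half the voting power says yes."""
    yes = sum(balances[member] for member, vote in votes.items() if vote == "yes")
    total = sum(balances[member] for member in votes)
    return yes > total / 2

votes = {"ana": "yes", "ben": "no", "chi": "yes"}
print(tally(votes, balances))  # True: 750 of 1000 tokens voted yes
```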

The DAO vision is to operate on Game Theory principles, where cooperating usually brings the best outcome for all instead of individuals defecting. This encourages rational thinking and removes selfishness from the equation. Anyone defecting produces the worst possible outcome for everyone.

Decentralized Finance (DeFi)

DeFi challenges the way we think about our money and how we manage it. Instead of intermediaries such as brokerages and banks being the gateway for lending, borrowing, and other financial services, people rely on other people via a peer-to-peer mechanism for all these services. The key component in DeFi is the smart contract, because it is automated, immutable (it cannot change), and transparent. DeFi already boasted roughly $200b in turnover in 2022. Decentralized apps, or DApps, enable all these financial functions on the blockchain.
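To picture what a smart contract automates here, consider a toy over-collateralised lending rule of the kind DeFi protocols encode: you may only borrow up to a fraction of the collateral you deposit. The 50% ratio is arbitrary; real protocols tune this per asset.

```python
# Toy DeFi-style lending rule: borrowing is only allowed against over-collateralisation.
# The 50% ratio is arbitrary and purely illustrative.

def max_borrow(collateral_value: float, collateral_ratio: float = 0.5) -> float:
    """How much can be borrowed against the deposited collateral."""
    return collateral_value * collateral_ratio

def can_borrow(collateral_value: float, amount: float) -> bool:
    return amount <= max_borrow(collateral_value)

print(can_borrow(collateral_value=1000.0, amount=400.0))  # True
print(can_borrow(collateral_value=1000.0, amount=800.0))  # False: under-collateralised
```

The point is not the arithmetic but that the rule is enforced by code on the chain rather than by a loan officer at a bank.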

Non-fungible tokens (NFTs)

The word fungible means that an item can be traded for a similar item, like money, for example. Non-fungible means that an item is unique and cannot be replaced by something else. It is not interchangeable.

A non-fungible token is a digital asset that represents a unique item such as a digital picture, a video, or a tweet (Jack Dorsey’s first tweet, for instance). This digital record uses smart contracts and programmable rules and is stored on the blockchain, giving mathematical proof of ownership and authenticity. NFTs are in their infancy, and initially they were more a meme than a real-world application.
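A simplistic way to picture non-fungibility and ownership: every token ID is unique, maps to exactly one owner, and can only be transferred by its current owner. This glosses over cryptographic signatures and the chain itself; the token ID and names are invented.

```python
# Toy NFT registry: each token ID is unique and owned by exactly one address.
# Real NFTs live on-chain and prove ownership with cryptographic signatures.

registry = {"token#1": "alice"}   # token ID -> current owner

def transfer(registry: dict, token_id: str, sender: str, receiver: str) -> bool:
    """Only the current owner can transfer the token."""
    if registry.get(token_id) != sender:
        return False
    registry[token_id] = receiver
    return True

print(transfer(registry, "token#1", "bob", "carol"))    # False: bob doesn't own it
print(transfer(registry, "token#1", "alice", "carol"))  # True
print(registry["token#1"])                              # carol
```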

The NFT craze happened, and now the bubble has popped, so to speak. It had to start somewhere. We aren’t sure what to do with NFTs yet. They are an important component of Web 3.0 and are paving the way for digital ownership on a decentralized network, thereby making large strides toward solving the copyright problems that run rampant on the internet today.

Challenges and Beyond

Creating a “new internet” requires many people to collaborate toward the same goal. That will be challenging. Jack Dorsey famously argued against venture capitalists funding Web 3.0 development. This poses the question of whether a web can be decentralized when corporations are buying into its development. When I fund someone, I expect some return, right? Same in this case. If corporations have a majority interest in decentralized networks, aren’t we back to being centralized again? Self-interest is the biggest threat to the decentralized web.

Another challenge is jurisdiction. These networks will run on nodes all over the world, but you have to respect that countries have their own rules for the digital space.

The decentralized web and blockchain may help address the General Data Protection Regulation, or GDPR, which is concerned with protecting the personal data of European citizens. It remains to be seen.

Conclusion

I hope you enjoyed my first journey exploring the fascinating yet complicated landscape that is Web 3.0. One of the biggest challenges today for this complicated, globally distributed system is its energy consumption and the initial costs of validating transactions. Another challenge, as I previously mentioned, is self-interest from a small group of individuals and corporations threatening decentralization. Crypto fraud on exchanges and blockchains is rampant. Think Luna, FTX (SBF), and Mt. Gox, to name a few.

Blockchain is widely used, but we have not seen the mass adoption and utility that was promised by all these companies with their tokens and coins. BTC and Ethereum seem to stand the test of time. I hope there will be more adoption of blockchain by governments and corporations.

Nonetheless, the evolution of Web 3.0 is fascinating, and it could be an important stepping stone toward whatever Web 4.0 may become.


How to improve one-on-one conversations as a Dev Manager

I love having one-on-one sessions with my manager. I associate this time with productivity and growth; not to sound selfish, but for 30 minutes it is about me and my career. The feedback on what I’m doing well and what I can improve is immediate. No use waiting for the year-end review to first learn what I could have done better! The improvement starts immediately.

But I have learned from my failures and from past managers what I prefer not to have done to me and what I should do for others.

On LinkedIn I often see my connections share motivational posts about leadership. I know some of these people because I worked with them, so I have some insight into their circumstances and how their superiors acted during their time at the company.

This is just my personal opinion and I have no data to back it up. However, these posts are often about “leader vs manager”, and I cannot help but think they are aimed at a previous manager, out of disappointment at not being grown or valued, which ultimately became the reason for their departure.

The point I’m trying to make is that your actions as a manager can affect a person negatively for a very long time. Someone is not having fun because of your ignorance and actions. THAT IS POWER!

Easy steps to effective conversations

I have rock-solid steps to improve one-on-one conversations with your engineers almost immediately. They are specific to online meetings, because that is where the worst of our habits present themselves. In a face-to-face meeting: listen, engage, and be respectful. Success depends on whether you are open to it as a manager, not necessarily on the employee. These steps are easy to do, and all they require is a shift in focus.

Switch on your camera

Let the person on the other side be assured they have your full attention. It is important to carry a message across with the correct body language. Tough conversations, for example, need body language that shows this is not emotionless but comes from a deeply concerned and caring place.

Do not multitask

In a one-on-one, multitasking is a big no-no. It is easy to spot when someone is reading while in a meeting. It takes the focus away from the conversation and affects your memory of it, which is essential when decisions about this person’s career growth depend on it. Relax your shoulders or clasp your fingers in front of you. These conversations can provide valuable insight and clues that may be useful in other discussions, creating new opportunities for the engineer. The conversation takes 30 minutes. Being respectful to them cultivates respect for you as their manager.

Keep notes

Simple but effective. Nothing says “my manager is listening to me” more clearly than a manager who can recall past conversations. On the flip side, managers can use notes to hold the engineer accountable and honest.

The above steps are simple yet highly effective.

The manager’s part in the conversation

Typically, when I hire a new engineer, I ask them what they want to work towards. It is usually seniority, architecture, analysis, or leadership. There are many people in an organization they can partner with and learn from. We need to help facilitate these connections and create opportunities for our people.

During every other one-on-one, we revisit their progress toward their goals. It is a good opportunity to identify gaps. For example, maybe they are too code-focused and, as a future software architect, need to learn how the entire system functions. The manager can catch this early on and advise the engineer on how to adjust their approach.

As I previously mentioned, imagine being an engineer who only receives feedback at the end-of-year review! I kid you not, this happens! I have experienced it at three of my previous jobs. That is a long time to wait for feedback. What if I’ve done something wrong and only feel the “punishment” (or reward) at the end of the year?

The need for negative feedback originates from a specific event and therefore must be dealt with immediately. Immediate feedback kick-starts the remedial process and allows course correction to take place for the engineer. It preserves the person’s confidence in the long run.

But as managers, we often miss these opportunities.

For a manager, someone concerned with another person’s growth and advancement, this presents a fantastic opportunity to transform failure into success. Taking a keen interest in your engineer’s career and staying up to date with their activities enables immediate course correction when it is required.

This should be the conversation in almost every one-on-one with the person. There should be no surprises at the end-of-year review.

We as managers do not always have access to budget or promotions, and even if we do, it is always such a balancing act that it is impossible to give to everyone. Therefore, taking a keen interest in someone’s career and growing them is in your control and is the best we can do for our people. It may just keep them a little bit longer at the company, because they are being valued and we are taking an interest in them.

It doesn’t always have to be money and title

I cannot overstate the influence a manager has over their engineers. I’m sure you have seen the many posts on LinkedIn saying, “People leave managers, not companies”. I have found this to be true over and over. Managers are also in the spotlight when engineers leave companies because of the type of work they perform. The manager can help by finding them a new role or moving the engineer onto a new project or team.

Conclusion

If you find managing people easy, you are doing something wrong. If you are managing people just for the title, stop and pivot your approach. Your responsibility is to grow your people into a better version of themselves. When they eventually leave the company, and many do, you can feel proud that they are leaving better than when they started. In the end, they are still leaving because of their manager; this time because you helped them become a better version of themselves.


How to empower teams to better support software systems?

Creating new systems from scratch is so much fun. I love it when you can dream up a project. I have a candy shop full of technologies to choose from. It is fun creating all those shapes and connecting the lines when laying out the architecture of the system. The highlight for me is when the development starts. Not so much fun, but necessary, is setting up the CI/CD pipelines, and then there is that magical moment when you promote the application to production! I have the best job in the world!

Well, not all engineers feel that way. What about the engineers or team(s) that have to maintain the system? These engineers don’t have the in-depth understanding of the system I had, because I was there from day 1. During any of the planning phases, did we think of them? Probably not.

I want to put the focus back on two often-neglected functions to ensure that support and maintainability are taken into account during the initial stages rather than reacted to later. This will make supporting any system a more pleasant and productive experience for the next engineer or team.

Handover to another team for support

Once a new system moves into the production environment, that is when the real “fun” starts. It is seldom the case that Team A develops the system or feature and maintains it until its end-of-life. Even if we assume that is the case, at some point people leave, and the team with the same name now looks different, while the system has not changed apart from new enhancements or features.

Team A, the original developer of the system, hands it over to Team B, the team supporting the new feature. Team A, a high-performing, fast-executing team with a specific skill set, moves on to a new project. Energy flows where the attention goes. Team A neglected to create sufficient documentation on the troubles the system experienced during the development and initial production phases.

When Team B takes over, it more often than not requires a handover meeting with Team A. Team A now needs to spend energy getting all the documentation up to date and adding more as Team B identifies gaps. The timing sucks, because Team A has to context switch and create documentation in a hurry while other priorities demand attention and the focus has already shifted. The quality of the documentation, as well as the communication during the handover, suffers.

Look at the following scenarios

Team A: Creates complete documentation
Team B: Reads all the documentation. Is self-serving and productive
Table A

Team A: Creates incomplete documentation
Team B: Reads incomplete documentation. Identifies the gaps
Team A: Meets with Team B
Team B: Meets with Team A
Team A: Updates outstanding documentation
Table B

I’ll admit. Table A looks like a pipe dream. Nonetheless, let’s marvel at the beauty of it. Extremely efficient!

During the development phase, many issues are identified and fixed, which provides a good opportunity to document these problems for future reference. Not every problem needs to be documented, since many end in code fixes, but some process and pipeline problems will recur.

Production is the best place to identify the one-percenters. Systems behave differently in different environments. The production database is much larger than in the non-prod environments, which hold obfuscated data and much less of it. During the initial production phase, there will be plenty of new problems to document simply because of the volume and combinations of data being served.

It was impossible to account for all these scenarios and exceptions during the development and testing phases. ETL processes can fail, and they will. Pipelines break during a deployment to production, and there will be other data issues. It is a perfect opportunity for Team A to document all these issues.

In addition to troubleshooting documentation, there must be installation and configuration guides as well. Think of EVERYTHING that will make life easy for newcomers to the system.

Documentation should be written as if you are explaining to a non-technical person, because we often try to help another person while assuming they have a certain predefined context of the system. That is where we create problems for ourselves and waste our own time. People will come back again and again because we neglected to explain the full context before providing the solution.

Administering and supporting the system

I struggle to recall any time in my software career when, during the design and development phase, the architects and engineers envisioned how the system might behave in production and what the potential problems or one-percenter scenarios would be. It may be part of the process initially, but it eventually falls behind due to time and money pressures. It is a habit to only think of the happy path.

We use the best coding techniques and practices. We use all the patterns that make the system robust. We have done everything to make the system perfect.

Then we run into problems…

Most large systems depend on a wide variety of data from different sources. In my experience the data is most often fed in by large ETL batch processes, or the system is a high-volume transactional one, or both! It becomes complicated to apply any large-scale fixes during failures, data removal, or large-scale data integrity problems. Flexibility is gold!

These are the standard questions to ask to ascertain, from a technical point of view, whether the system can be remedied at its simplest:

  • Is there a job or process that can be run to fix data that was processed incorrectly or inserted incorrectly during input?
  • Can these processes be run during business hours?
  • Does everyone on the team have sufficient permissions or access to the servers?
  • Do we have the ability to update records in batches? (A sketch of such a batched fix follows this list.)
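As a concrete example of that last point, a data-fix job can work through records in small batches with a pause in between, which keeps the load low enough to run during business hours. This is only a sketch: the orders table, the status values, and the batch size are placeholders for whatever your system actually needs.

```python
import sqlite3
import time

# Sketch of a batched data fix, assuming a hypothetical `orders` table whose
# `status` column was populated incorrectly. Small batches plus a pause keep
# the load on the database low enough to run while the business is online.

def fix_in_batches(db_path: str, batch_size: int = 500, pause_s: float = 1.0) -> int:
    conn = sqlite3.connect(db_path)
    fixed = 0
    while True:
        rows = conn.execute(
            "SELECT id FROM orders WHERE status = 'UNKNOWN' LIMIT ?", (batch_size,)
        ).fetchall()
        if not rows:
            break
        ids = [(row[0],) for row in rows]
        conn.executemany("UPDATE orders SET status = 'PENDING' WHERE id = ?", ids)
        conn.commit()
        fixed += len(ids)
        time.sleep(pause_s)   # give the database room to breathe between batches
    conn.close()
    return fixed
```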

Even if the answer to all of the above is yes, it is still not the ideal situation. Some engineers lack confidence, and if these services are restarted and fail, or are restarted at the wrong time, it can affect clients and SLAs. Not everyone has elevated permissions and access to database servers, for example. We often lack flexibility, and by that I mean the tooling to remedy data processing problems at scale. Let’s take a look at two areas that can optimize support here.

Automation

In our line of work, we have the luxury of the skills to automate critical business processes and/or the mundane tasks we have to execute daily. We are good at automating releases and testing. Those are fundamentals of our systems and are necessary! We have to!

I don’t think we are nearly as good at automation for self-healing or self-correction. Netflix’s infrastructure is too large for humans to monitor, and out of necessity they implemented intelligent systems to monitor it and apply corrections. Does this kind of intelligence always have to be born out of necessity? Why can’t it be baked in simply as good practice? Are we afraid of losing control and losing our jobs? Or are we just mentally lazy?

The fact is that this type of automation will set you apart from the rest. Remember: energy flows where the attention goes. Having these processes in place frees engineers up to focus on innovation and revenue opportunities, or to pay back the (tech) debt. You don’t have to be a large-scale company to achieve this.

Let third-party tools do the night shift instead of you checking your email or Teams/Slack messages every two hours. If these are properly configured, they can do a lot more than you, and best of all they never get tired! We have AWS Lambdas, Azure Functions, Python scripts, PowerShell, Apache NiFi, and any number of other robust task automation tools. There are multiple options and no excuses.
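To make this tangible, here is a sketch of the kind of scheduled check that can “do the night shift”, whether it runs as a cron job, a Lambda, or an Azure Function: if the pipeline looks unhealthy, restart the worker and report what happened before any human is woken up. The health-check URL, service name, and webhook are placeholders.

```python
import json
import subprocess
import urllib.request

# Sketch of a scheduled "night shift" check. The URL, service name, and webhook
# are placeholders -- the point is that the script restarts and reports before
# anyone gets paged.

HEALTH_URL = "https://internal.example.com/etl/health"   # hypothetical endpoint
ALERT_WEBHOOK = "https://hooks.example.com/alerts"       # hypothetical Teams/Slack webhook

def healthy() -> bool:
    try:
        with urllib.request.urlopen(HEALTH_URL, timeout=10) as resp:
            return resp.status == 200
    except OSError:
        return False

def notify(message: str) -> None:
    data = json.dumps({"text": message}).encode()
    req = urllib.request.Request(ALERT_WEBHOOK, data=data,
                                 headers={"Content-Type": "application/json"})
    urllib.request.urlopen(req, timeout=10)

if not healthy():
    # First-line remediation: restart the worker, then tell the channel what happened.
    subprocess.run(["systemctl", "restart", "etl-worker"], check=False)
    status = "recovered" if healthy() else "still failing, needs a human"
    notify(f"ETL health check failed; restarted etl-worker; status: {status}")
```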

Administration interfaces

What if we need to invalidate client records and remove them from the system on request? What if we need to spot-fix a couple of records with incorrect information, and it is too expensive to re-run the processes or fix them at the source? Typically, a team would run a script directly against the database. Is there a review process? Is the script optimized enough if it needs to delete thousands of records?

Will this script contend for resources on the database server during business hours or after hours? Is this process secure? With some planning upfront and thinking through these scenarios, engineering teams can create APIs and administration UIs to capture these one-percenters. APIs are effective when UIs aren’t available, and with proper authorization and authentication, non-technical people can use them too.
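For instance, a small internal admin endpoint, sketched here with Flask and a made-up authorisation check and record shape, lets a support engineer (or a trusted non-technical user) apply a reviewed spot fix without anyone hand-running SQL against production.

```python
from flask import Flask, abort, jsonify, request

app = Flask(__name__)

# In-memory stand-in for the real data store; the record shape is invented.
records = {"123": {"status": "ACTIVE", "email": "old@example.com"}}
ALLOWED_FIELDS = {"status", "email"}          # whitelist what admins may touch

def authorised(req) -> bool:
    # Placeholder: swap in your real authentication/authorisation check.
    return req.headers.get("X-Admin-Token") == "secret-token"

@app.route("/admin/records/<record_id>", methods=["PATCH"])
def patch_record(record_id: str):
    """Spot-fix a single record through a reviewed, audited code path."""
    if not authorised(request):
        abort(403)
    if record_id not in records:
        abort(404)
    changes = {k: v for k, v in request.get_json(force=True).items()
               if k in ALLOWED_FIELDS}
    records[record_id].update(changes)
    app.logger.info("admin fix on %s: %s", record_id, changes)   # audit trail
    return jsonify(records[record_id])

if __name__ == "__main__":
    app.run(port=8080)
```

Because the fix goes through code, it can be reviewed, tested, rate-limited, and logged like any other change to the system.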

These problems are the one-percenters, but they tend to take an outsized amount of time and effort to mitigate and remedy. It would be prudent to build these mitigation steps during the development phase so these difficult requests can be administered later. Every microservice or system should have some administration component for the data it produces.

These administration functions, whether UI or API endpoints, can be used by non-engineers and are typically safe to use, removing the expensive time and effort of engineers executing these processes or scripts by hand. It creates flexibility and confidence in the system.

Conclusion

It is important to include this documentation, administration, and automation during the design and development stages. Create awareness and be diligent about it from the start. The price you pay upfront will be relatively small compared to the huge price you will pay later.

It is not only about making life easier for you or your team. Be different and make it easier for everyone supporting the system down the line. The lifespan of a good system can easily be 5-10 years. Many people and engineers will be responsible for it over that period.

I would love to hear your thoughts…
