Saturday, September 25, 2021

Search for the "God Protocol"

Let me start with a little bit of trivia! The story goes that Nobel Prize-winning physicist Leon Lederman referred to the Higgs boson as the "Goddamn Particle." The nickname was meant to poke fun at how difficult the particle was to detect. Once the Higgs boson was finally discovered after decades of research, the phrase "God Particle" made bold headlines in media outlets across the world, and the term stuck. Analogous to that pathbreaking moment in the history of human scientific evolution, a similar pathbreaking moment is happening in the blockchain world, and I refer to it as the "God Protocol". It is equally disruptive, powerful enough to potentially change the way the world conducts business in the decades to come, and it is the topic of discussion in this article.

 

Birth of Bitcoin, the Blockchain and the Consensus Mechanism:

Satoshi Nakamoto (a pseudonym) released a paper in 2008, just around the time the Lehman crisis unfolded, which laid the foundations for a peer-to-peer (P2P), decentralized, self-running, self-sustaining technology system for facilitating electronic transactions. The beauty of it was that any two entities unknown to each other could transact without relying on any "intermediate or central trust authority". Think of it: an open-source technology released to the world in 2008, akin to a child abandoned at birth, yet slowly but surely looked after by an ever-growing number of peer-to-peer network participants acting as its guardians. The abandoned child has not only survived but has grown into a toddler with the promise of immense potential for the future. The magic potion, if you will, that sustained this parentless child is the "Consensus Protocol", which allowed unknown network entities to transact and evolve in an orderly manner without any single trusted central parent or central body. Among the many components of the protocol, the consensus mechanism is the central, truly disruptive one.

 

 

What Is a Consensus Mechanism?

A consensus mechanism (also called a consensus protocol or consensus algorithm) is the method through which all network participants, the peer-to-peer nodes, reach agreement on the creation of the next block in the blockchain, each block being a collection of transactions. The consensus mechanism additionally enables governance of the "state of the network", including its upgrades and enhancements, so the network can continuously evolve to meet the ever-changing needs of its users. It ensures that each new block added to the blockchain is the one and only truth. The consensus protocol thus achieves reliability, collaboration, cooperation and fair play, establishing security and trust between unknown peer entities. For all blockchain network participants, the consensus mechanism is the "God" in the proverb "In God We Trust", and hence I refer to it here as the "God Protocol".

 

Satoshi Nakamoto described a type of consensus mechanism called "Proof of Work" in the legendary white paper. Since then, hundreds of communities have been working furiously to enhance and evolve these algorithms. All of them strive for a perfect solution to the most common problem statement, also called the Blockchain Trilemma. The trilemma is like a three-legged stool, each leg being:

  • Security (no single node or group of nodes can collude to insert fake transactions)
  • Speed (transactions should confirm fast enough to compare with today's centralized systems such as Visa, Mastercard or stock exchanges)
  • Decentralization (the most important driver for blockchain is that it is not controlled by a single entity/state/private body). 
The consensus algorithm has to ensure that none of these legs is sacrificed in the interest of the others.

We will now discuss a few examples of consensus algorithms.

Proof of Work (PoW) is the consensus algorithm used by the original Bitcoin blockchain and the current Ethereum 1.0 network. In this mechanism, all participating mining nodes compete to create the next block, but only one winner gets to author/mine it. Every mining node races to solve a complex mathematical puzzle, thus demonstrating proof of work; the node that solves the puzzle first emerges as the winner and earns the right to add the next block to the blockchain. The mechanism suffers from high energy consumption and long processing times.
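To make the idea concrete, here is a minimal Python sketch of the PoW puzzle (a toy illustration only; Bitcoin's real block header format and its double SHA-256 hashing differ):

```python
import hashlib

def mine_block(previous_hash: str, transactions: str, difficulty: int):
    """Brute-force a nonce until the block hash starts with `difficulty` zero hex digits."""
    target_prefix = "0" * difficulty
    nonce = 0
    while True:
        header = f"{previous_hash}|{transactions}|{nonce}".encode()
        block_hash = hashlib.sha256(header).hexdigest()
        if block_hash.startswith(target_prefix):
            return nonce, block_hash  # the proof of work
        nonce += 1

# The miner that finds a valid nonce first wins the right to add the block.
nonce, block_hash = mine_block("0000abcd", "alice->bob:5 coins", difficulty=4)
print(f"nonce={nonce}, hash={block_hash}")
```

Notice the asymmetry: finding the nonce takes many thousands of hash attempts on average (astronomically more at real network difficulty), while verifying the winner's claim takes just one hash, which is exactly what makes the "work" a trustworthy proof.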

Proof of Stake (PoS) is another widely used mechanism, in which participant nodes deposit (stake) coins, demonstrating their skin in the game. Nodes share the responsibility of mining the next block and are rewarded in proportion to their stake. Any node found misbehaving during the validation process is disincentivized in some form, for instance by forfeiting part of its stake, thus maintaining the sanctity of the blockchain network. This type of consensus mechanism uses far less energy than PoW and also increases speed, but it introduces other drawbacks such as coin hoarding. #Ethereum 2.0 will migrate from PoW to PoS with the upcoming upgrade, and the #Cardano blockchain is also based on a variation of PoS.
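A minimal sketch of the core PoS idea, stake-weighted selection of the next block proposer (node names and stakes are hypothetical; real chains derive the randomness from an agreed-on beacon, not a fixed seed):

```python
import random

# Hypothetical validators and the coins they have staked.
stakes = {"node-a": 50, "node-b": 30, "node-c": 20}

def select_proposer(stakes: dict, seed: int) -> str:
    """Pick the next block proposer with probability proportional to stake."""
    rng = random.Random(seed)  # real chains use a shared, unbiasable randomness source
    nodes = list(stakes)
    weights = [stakes[n] for n in nodes]
    return rng.choices(nodes, weights=weights, k=1)[0]

# node-a, holding half the total stake, is picked about half the time on average.
print(select_proposer(stakes, seed=2021))
```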

Delegated Proof of Stake (DPoS) is a further variation, in which a voting-based consensus protocol is used to elect a board, and the board members earn the additional right to mine blocks.
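A toy sketch of the DPoS election step (all names and numbers hypothetical): coin holders cast stake-weighted votes, and the top vote-getters form the board that produces blocks.

```python
from collections import Counter

# (voter, delegate voted for, coins held) - stake-weighted voting.
votes = [("alice", "delegate-1", 40), ("bob", "delegate-2", 35), ("carol", "delegate-1", 25)]

tally = Counter()
for voter, delegate, coins in votes:
    tally[delegate] += coins  # each coin counts as one vote

BOARD_SIZE = 2
board = [delegate for delegate, _ in tally.most_common(BOARD_SIZE)]
print(board)  # elected delegates typically take turns producing blocks in round-robin order
```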

Several alternative mechanisms have emerged, mostly categorized as:

1.   Consensus protocols based on Effort of Work.

2.   Consensus protocols based on the Amount of Resources.

3.   Consensus protocols based on Importance, past behavior, or Reputation. 


Some examples are:

Proof of Capacity (PoC) lets contributing nodes pledge memory or storage space to the network. The more storage a node commits, the higher its stake, and hence its responsibility and rewards.

Proof of History (PoH) was developed by the #Solana project, and is the most complex to understand and explain! In essence, it uses a continuously running sequential hash chain as a cryptographic clock: because each hash depends on the previous one, the chain itself proves the order of, and the passage of time between, the events recorded in it.
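A simplified sketch of that core idea as I understand it (a toy, nothing like Solana's actual implementation): mixing an event into one tick of the hash chain proves the event happened before all later ticks.

```python
import hashlib

def poh_tick(state: bytes, event: bytes = b"") -> bytes:
    """One tick of the clock: hash the previous state, optionally mixing in an event."""
    return hashlib.sha256(state + event).digest()

state = b"genesis"
for tick in range(5):
    event = b"tx: alice->bob" if tick == 2 else b""
    state = poh_tick(state, event)
    print(tick, event, state.hex()[:16])

# Since each hash depends on the previous one, the chain cannot be computed in
# parallel or ahead of time; replaying it verifies the order of recorded events.
```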

Proof of Burn (PoB) requires participant nodes to demonstrate their skin in the game by burning coins (sending them to an unspendable address), taking a short-term loss for a future gain.

Proof of Elapsed Time (PoET): Originally invented by #Intel, it somewhat resembles the random back-off of the CSMA/CD media-access protocol used in early local area networks: each node starts a random wait timer, and the one with the shortest wait emerges as the winner entitled to author the next block. This needs specialized trusted hardware at each node to cryptographically attest the passage of time.
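A toy sketch of a PoET round (node names hypothetical; the crucial hardware attestation that a node really waited is deliberately omitted):

```python
import random

def poet_round(nodes: list, seed: int):
    """Each node draws a random wait time; the shortest timer wins the next block.
    Real PoET relies on trusted hardware to attest that the timer was honest."""
    rng = random.Random(seed)
    wait_times = {node: rng.uniform(0.0, 10.0) for node in nodes}
    winner = min(wait_times, key=wait_times.get)
    return winner, wait_times

winner, times = poet_round(["node-a", "node-b", "node-c"], seed=7)
print(winner, times)
```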

Proof of Authority (PoA): #VeChain, one of the most popular blockchains for authenticating supply-chain records, uses this protocol. Here, as the name suggests, specific master nodes with the authority to produce blocks are designated by the governing community.

 "Gossip protocol": You read it right!. Each node transmits information that it has learned to neighbor nodes, and an overall gossip graph is constructed. To me, this closely resembles the RIP protocol used by early-stage TCP/IP networks, whereby the nodes learn from each other and transmit what they have learned to other neighbors.  The #HederaHashgraph blockchain uses this protocol. 

If you have read this far, you will have realized that there is an ongoing race among technologists, university academia and crypto venture backers to research the next ideal blockchain consensus protocol, one that is energy efficient while accomplishing speed, security and decentralization.

I will end this post by summarising that several variations of the "God Protocol" exist today, albeit at an early stage of adoption; but as they say in philosophy, the search for eternal truth always remains.

Saturday, June 26, 2021

The Phoenix Project. Key takeaways and musings.

 

To enable fast and predictable lead times in any value stream, there is usually a relentless focus on creating a smooth and even flow of work, using techniques such as small batch sizes (small containers), optimal inventory at each work centre, reducing work in process (WIP), and preventing rework so that defects are not passed to downstream work centres. These are the most widely used concepts in modern manufacturing systems.

 

The same principles and patterns that enable fast flow of work in a manufacturing setup are equally applicable to the technology world. The only difference is that in the technology world the work is invisible (code, bits and bytes, applications stored in computer systems). In DevOps, we typically define the technology value stream as the process required to convert a business objective or requirement into a technology solution deployed in a production environment, enabling a service that delivers value to the customer.

 


 

Whether or not one agrees with the DevOps methodology, this book (The Phoenix Project by Gene Kim, Kevin Behr and George Spafford), written as a story about a fictional company, presents the framework in a simple, generally understandable format. As you read, there are many moments that will make you pause, reflect, and connect the characters and situations to characters at your own workplace. The key takeaway is the "Three Ways". The First Way is to increase the flow of work from left to right of the value stream, i.e. from business requirements to operations in the IT world. The Second Way is to generate consistent, fast feedback loops, amplifying the feedback to create quality at each step and catch defects early in the value stream. The Third Way is to build a culture of shared objectives and continuous learning. The authors develop the theme from the key ideas of Eliyahu Goldratt's seminal Theory of Constraints (TOC) and also refer generously to the Kanban system from the Toyota Production System (TPS).

 

How do we manage Constraints?

1. Identify the constraint. This is the key step. On a production floor, work components are visible; in the technology world, however, work is invisible, and hence WIP (work in process) and constraints go unnoticed unless you have a system of status notifications, review cadence, etc.

2. Exploit the constraint to maximise its effectiveness. Prioritise work at the constraint.

3. Subordinate everything else to the constraint. Explore workarounds and alternate workflows.

4. Elevate the constraint. Increase its capacity, and ensure that everyone else supports and helps the constraint so that it can do its optimal work.

5. Institutionalise the learnings, and repeat these steps continuously; once one constraint is resolved, another emerges. (A toy simulation just after this list shows why the constraint sets the pace.)
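Here is that toy simulation (capacities are made-up numbers) of why the constraint sets the pace of the whole value stream:

```python
# Three work centres in series and their daily capacities; QA is the constraint.
capacities = {"dev": 8, "qa": 3, "ops": 6}

def daily_throughput(capacities: dict) -> int:
    """In a serial flow, the slowest work centre caps the whole stream's output."""
    return min(capacities.values())

print(daily_throughput(capacities))  # 3: pushing more than 3 items/day into dev only piles up WIP

# Elevating the constraint (say, adding QA capacity) is the only change that raises throughput.
capacities["qa"] = 7
print(daily_throughput(capacities))  # 6: the constraint has moved to ops, so we go back to step 1
```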

 

Here are a few observations to churn your mind and generate points of view.

 

1. Brent, the character who epitomises the constraint in the IT operations workflow, is brilliant, smart, and a subject-matter expert in many domains. We can all relate to similar characters in our workplaces. In the story, Brent is helpful, always eager to assist without ego or expectation of return. What if Brent were arrogant and pushed back, or worse, prioritised help requests based on a "what's in it for me" syndrome? That typically creates power centres in parallel to the work centres. Think of ways to handle this type of constraint. Is he the constraint? What can be done?

2. What if Brent were an information hoarder, branding himself the indispensable hero? Is his brilliance an asset, or has it become a liability to the optimal flow of work?

3. How do we scale Brent?

a. Let Brent coach and guide other people instead of doing the work himself. This system of mentees, if you will, may help scale the "constrained work centre" and ease the constraint.

b. Prioritise work requests to Brent so that only critical work gets assigned to him, with no other interrupts.

4. Do you see the DevOps flavour in the old Indian system of "Jugaad"? It is mostly found in medium-scale enterprises with a workforce that is lean by design more than by choice, where people take on multiple roles simultaneously and adopt a very iterative process of development, deployment and operations.

5. As a leader, how would you feel about being a constraint? What steps should a leader take to scale?

 

Some more musings, using analogy to compare and contrast the "Three Ways" with the various communication protocols and systems that have evolved over time.

When I apply the "Three Ways" principles to the evolution of the data communications industry, some fascinating parallels emerge. In the early days of IBM SNA mainframe-to-terminal communication and the X.25/Frame Relay protocols, there were implicit and explicit flow-control signals at each step of the data flow. The end-to-end data flow was consistent and predictable, with negligible wastage of information blocks, although the data chunks were small compared to networks today. This resembles the manufacturing-floor flow of work from work centre to work centre using Kanban cards as signals, ensuring that no single work centre or hop became a constraint to the flow.

Over time the IP networking protocols emerged, and with them different options for flow control: explicit congestion signalling and end-to-end flow control (as in the TCP window size negotiated between source and destination). A more implicit mechanism is often preferred, in which packets are dropped to tell the sender to slow down (yes, you heard that right: dropping data packets, which would be considered wastage in a manufacturing system). Whole schools of thought and bodies of deep research emerged over the years, proposing ways to regulate data flow across networks to reduce packet wastage, improve latency predictability, and self-heal upon detecting a constraint at any intermediate node; many protocol white papers and RFCs were written. It is fascinating that this is conceptually very similar to the studies done in the Toyota Production System and TOC, striving to create optimal flow of components and products with consistent quality and reduced wastage on the manufacturing floor. Concepts can originate in one industry domain yet still apply across industries and setups; you just need to refine them for the target environment.
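As a toy illustration of that implicit signalling, here is the classic AIMD (additive-increase, multiplicative-decrease) rule at the heart of TCP congestion control, sketched in Python:

```python
def aimd(drops: list, increase: float = 1.0, decrease: float = 0.5) -> list:
    """Grow the congestion window steadily each round trip; halve it whenever a
    packet drop signals congestion somewhere along the path."""
    cwnd, history = 1.0, []
    for dropped in drops:
        cwnd = cwnd * decrease if dropped else cwnd + increase
        history.append(round(cwnd, 1))
    return history

# False = a round trip with no loss; True = a drop (the network's implicit "slow down").
print(aimd([False, False, False, True, False, False, True, False]))
# [2.0, 3.0, 4.0, 2.0, 3.0, 4.0, 2.0, 3.0] -> the familiar sawtooth
```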


The book's references to the manufacturing flow principles borrowed from TPS and TOC revived nostalgic memories of my early days as a production supervisor apprentice at Siemens, India. The manufacturing floor layout was optimised for the flow of the product (induction motors and switchgear control panels) as it was built from scratch to final assembly. The occasional pile-up of completed products at the systems-test QA work centre, sometimes bringing the entire workflow to a halt, is one of the recurring problems in manufacturing that seminal bodies of work like TPS and TOC have since solved. The same principles are now rehashed and refined to match the new environment of the technology value stream, under a new name: the DevOps principles.

Monday, May 3, 2021

Is the CDMO wave the next déjà vu moment for India, like the 90s wave of the ODM model in electronics product engineering services?




Let me start with a disclaimer: I am not an expert in pharma or biologics. However, I have been keenly observing the CDMO (Contract Development and Manufacturing Organization) movement taking shape in the country, pursued by leading pharma and biologics players, and I would like to draw parallels with the ODM wave in electronics design and manufacturing in the early 80s and 90s.

I was deeply involved with the ODM business model, which was flourishing by the 90s, when Taiwan emerged as the leader in the Original Design Manufacturing (ODM) industry (even in 2020 it still leads, with the largest share of global ODM and OEM electronics design and manufacturing exports). What struck me is that there are many similarities between the ODM and CDMO business models, and it would be good to be aware of them and of the lessons they hold. The erstwhile leading Indian IT services companies and their investors got totally blindsided and missed the ODM bus, as most of them were basking in the glory and growth of the headcount-led software coding and maintenance services model.

My stints at Tata Elxsi and Sasken in the early 2000s gave me an opportunity to observe the ODM success model closely, through many visits to Taiwan and associations with small and large ODMs there, including participation in Computex, the annual global ecosystem event in Taipei. We did embark on a journey to emulate the ODM model here in India, but with limited success, and therein lie the lessons learnt. Brands like BenQ, Foxconn and Wistron are some of the well-known names, and all owe their success to the ODM model. The critical ingredients for success were all there in Taiwan; coupled with the cultural and geographic proximity to China (in the 90s the two were less politically apart), the ODMs established manufacturing (OEM) bases in China, and together the leading ODM+OEM players still rule global electronics design and manufacturing supply.

Many constituents need to come together to form an ecosystem for the ODM model to succeed: integrated-circuit makers, both analog and digital; multi-layer precision PCB fabricators; electronic component suppliers; LED display makers; electro-mechanical sub-assembly makers; electronic test-automation tools and experts; and design experts. Most importantly, the proximity of the ecosystem enabled tight collaborative working relationships, leading to agility and adaptability in responding to customer requirements. Obviously, India never had any major success in integrated circuits, microprocessors or memory manufacturing, high-density precision PCB fabrication, or design services that could quickly convert a design into a prototype. Hence, in wave one of the ODM model, we could never gain market share; Taiwan, China, Thailand and Vietnam are still ahead of us.

Just like the ODM ecosystem, the pharma CDMO model is about custom synthesis for pharma majors and biologics start-ups who prefer the entire lifecycle, molecule synthesis, integration, prototyping, trials and manufacturing at scale, to be delivered by a reliable, trusted CDMO partner. The main difference compared to the earlier ODM wave is that all the ecosystem constituents needed for the CDMO business model to succeed are already present in good measure, and specialty chemicals manufacturing, including APIs, is being scaled up in the country. Further, geographic chemical industrial clusters allow easy collaborative working relationships among the players. The good news is that India has all of the above constituents, with world-class companies, cutting-edge manufacturing processes and economies of scale. Adding to this, we have an ecosystem of chemistry, pharma and biology scientists, all coming together under aspirational managements with vision.

This makes me feel confident that in this CDMO wave, India will be far more successful than in the earlier ODM wave and will make a name for itself with significant market share. Huge opportunities exist for creative minds and investors alike.