Saturday, September 25, 2021

Search for the "God Protocol"

Let me start with a little bit of trivia! The story goes that Nobel Prize-winning physicist Leon Lederman referred to the Higgs as the "Goddamn Particle," a nickname meant to poke fun at how difficult the particle was to detect. When the particle was finally discovered after many decades of research, the phrase "God Particle" made bold headlines in media outlets across the world, and the term stuck. The "God Particle" referred to was the Higgs Boson. Analogous to that pathbreaking moment in the history of human scientific evolution, a similar pathbreaking moment is happening in the blockchain world, and I refer to it as the "God Protocol". It is equally disruptive, powerful enough to potentially change the way the world conducts business in the decades to come, and it is the topic of discussion in this article.

 

Birth of Bitcoin, the Blockchain and the Consensus Mechanism

Satoshi Nakamoto (a pseudonym) released a paper in 2008, just around the time the Lehman crisis unfolded, which laid the foundations for a peer-to-peer (P2P), decentralized, self-running, self-sustaining technology system for facilitating electronic transactions. The beauty of it was that any two entities unknown to each other could transact without relying on any "intermediate or central trust authority". Think of it: an open-source technology released to the outside world in 2008, akin to a child abandoned at birth, yet looked after slowly but surely by an ever-increasing number of peer-to-peer network participants acting as guardians. The abandoned child has not only survived but has grown into a toddler with the promise of immense potential for the future. The magic potion, if you will, which sustained this parentless child is the "Consensus Protocol", which allowed unknown network entities to transact and evolve in an orderly manner without any single trusted central parent or central body. Among the many components of the protocol, the consensus mechanism is the central, truly disruptive one.

 

 

What Is a Consensus Mechanism?

A consensus mechanism (also called a consensus protocol or consensus algorithm) is a method through which all network participants / peer-to-peer nodes reach agreement on the creation of the next block in the blockchain, a block being a collection of transactions. The consensus mechanism additionally enables governance of the "state of the network", including its upgrades and enhancements, so the network can continuously evolve to meet the ever-changing needs of its users. The consensus mechanism ensures that each new block added to the blockchain is the one and only truth. It thereby achieves reliability, collaboration, cooperation and fair play, establishing security and trust between unknown peer network entities. The consensus mechanism is the "God" in the saying "In God We Trust" for all the blockchain network participants, and is therefore referred to here as the "God Protocol".
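To make the idea of "one and only truth" concrete, here is a minimal, illustrative sketch in Python (my own toy example, not any specific blockchain's data format): each block commits to the hash of its predecessor, so once the network agrees on the newest block, the entire history behind it is fixed and tamper-evident.

```python
# Toy hash-chain sketch: each block stores the hash of the previous block,
# so changing any historical transaction breaks every hash that follows it.
import hashlib
import json


def block_hash(block):
    """Deterministic SHA-256 hash of a block's contents."""
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()


def make_block(prev_hash, transactions):
    return {"prev_hash": prev_hash, "transactions": transactions}


def chain_is_valid(chain):
    """Every block must reference the hash of the block before it."""
    for prev, curr in zip(chain, chain[1:]):
        if curr["prev_hash"] != block_hash(prev):
            return False
    return True


genesis = make_block("0" * 64, ["genesis"])
blk1 = make_block(block_hash(genesis), ["Alice pays Bob 5"])
blk2 = make_block(block_hash(blk1), ["Bob pays Carol 2"])
print(chain_is_valid([genesis, blk1, blk2]))   # True
blk1["transactions"] = ["Alice pays Bob 500"]  # tamper with history
print(chain_is_valid([genesis, blk1, blk2]))   # False: later links no longer match
```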

 

Satoshi Nakamoto described a type of consensus mechanism called "Proof of Work" in his legendary white paper. Since then, hundreds of communities have been working furiously to enhance and evolve these algorithms. All of them strive to find a perfect solution to the most common problem statement, also called the Blockchain Trilemma. The trilemma is like a three-legged stool, each leg being 

  • Security (no single node or group of nodes can collude to insert fake transactions)
  • Speed (the elapsed time for a transaction should be fast enough compared to today's centralized systems such as Visa, Mastercard or stock exchanges)
  • Decentralization (the most important driver for blockchain: it is not controlled by a single entity, state or private body). 
The consensus algorithm has to ensure that none of these is sacrificed in the interest of the others. 

We will discuss a few examples of various consensus protocol algorithms.

Proof of Work (PoW) is the consensus algorithm used by the original Bitcoin blockchain and the current Ethereum 1.0 network. In this mechanism, all participating mining nodes compete to create the next block, but only one winner gets to author/mine it. Every mining node works to solve a complex mathematical puzzle, thus demonstrating proof of work, and the node that solves the puzzle first emerges as the winner and earns the right to add the next block to the blockchain. This mechanism suffers from high energy consumption and longer processing times. 
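As an illustration, here is a toy proof-of-work loop in Python (a minimal sketch of the idea only, not Bitcoin's actual block format or difficulty rules): the miner keeps trying nonces until the hash of the block data meets a difficulty target, and anyone can verify the result with a single hash.

```python
# Toy proof-of-work: find a nonce whose hash has `difficulty` leading zeros.
import hashlib


def mine(block_data, difficulty):
    """Search for a nonce so that sha256(block_data + nonce) meets the target."""
    target = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if digest.startswith(target):
            return nonce, digest          # the proof of the work performed
        nonce += 1


def verify(block_data, nonce, difficulty):
    """Verification is cheap: one hash, versus the many hashes tried while mining."""
    digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
    return digest.startswith("0" * difficulty)


nonce, digest = mine("prev_hash|tx1,tx2,tx3", difficulty=4)
print(nonce, digest)
print(verify("prev_hash|tx1,tx2,tx3", nonce, difficulty=4))  # True
```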

Proof of Stake (PoS) is another commonly used mechanism, in which the participating nodes deposit coins, or stakes, demonstrating their skin in the game. Nodes share the responsibility of creating the next block and are rewarded based on their staked contribution. Any node found misbehaving during the validation process is disincentivized in some form, thus maintaining the sanctity of the blockchain network. This type of consensus mechanism uses far less energy than PoW and also increases speed, but it introduces other drawbacks such as coin hoarding. #Ethereum 2.0 will migrate from PoW to PoS with the upcoming upgrade, and the #Cardano blockchain is also based on a variation of PoS.  
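A minimal sketch of stake-weighted selection, under simplifying assumptions (a plain proportional lottery; real PoS chains add randomness beacons, committees and detailed slashing rules on top of this): the more coins a node has staked, the more often it is chosen to propose the next block, and misbehaviour is punished by confiscating part of the stake.

```python
# Toy proof-of-stake lottery: selection probability is proportional to stake.
import random

stakes = {"node_A": 50, "node_B": 30, "node_C": 15, "node_D": 5}  # staked coins


def pick_proposer(stakes, seed):
    # In reality the seed comes from an agreed-upon, unbiasable source of randomness.
    rng = random.Random(seed)
    nodes, weights = zip(*stakes.items())
    return rng.choices(nodes, weights=weights, k=1)[0]


def slash(stakes, node, fraction=0.5):
    """Penalize a misbehaving validator by confiscating part of its stake."""
    stakes[node] = int(stakes[node] * (1 - fraction))


# Over many blocks, node_A (50% of the stake) proposes roughly half of them.
picks = [pick_proposer(stakes, seed=i) for i in range(1000)]
print({n: picks.count(n) for n in stakes})
slash(stakes, "node_B")   # node_B caught validating a fake transaction
print(stakes["node_B"])   # its stake (and future influence) shrinks
```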

Delegated Proof of Stake (DPoS) is another variation, in which a voting-based consensus protocol is used to elect a board, and the board members have additional rights to mine blocks. 

Several alternative mechanisms have emerged, mostly categorized as 

1.   Consensus protocols based on Effort of Work.

2.   Consensus protocols based on the Amount of Resources.

3.   Consensus protocols based on Importance, past behavior, or Reputation. 


Some examples are:- 

Proof of Capacity (PoC) enables sharing of the memory or storage space of the contributing nodes on the network. The more memory or storage space a node commits, the higher its stake, and hence its responsibility and rewards.

Proof of History (PoH) was developed by the #Solana network project, and is among the most complex to understand and explain! 

Proof of Burn (PoB) requires participant nodes to demonstrate their skin-in-the-game by burning coins and taking a short-term loss for a future gain. 

Proof of Elapsed Time (PoET): Originally invented by #Intel, it somewhat resembles the CSMA/CD media-access protocol of local area networks: each node starts a random wait timer, and the one with the least wait time emerges as the winner and authors the next block. This needs specialized hardware at each node to encode the passage of time cryptographically.
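A toy sketch of the PoET idea (illustrative only; in the real protocol the wait time is generated and attested inside trusted hardware, which is what prevents a node from simply claiming a zero wait): every node draws a random wait time, and the shortest wait wins the right to author the block.

```python
# Toy PoET round: the node with the shortest random wait authors the next block.
import random

nodes = ["node_A", "node_B", "node_C", "node_D"]


def run_poet_round(nodes, seed=None):
    rng = random.Random(seed)
    waits = {node: rng.uniform(0, 10) for node in nodes}  # seconds each node must wait
    winner = min(waits, key=waits.get)                    # least wait wins the round
    return winner, waits


winner, waits = run_poet_round(nodes, seed=42)
print(waits)
print("block author:", winner)
```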

Proof of Authority (PoA): #VeChain, a popular blockchain used for authenticating supply-chain records, uses this protocol. Here, as the name suggests, specific master nodes are designated by the governing community. 

 "Gossip protocol": You read that right! Each node transmits the information it has learned to its neighbour nodes, and an overall gossip graph is constructed. To me, this closely resembles the RIP protocol used by early-stage TCP/IP networks, whereby nodes learn from each other and transmit what they have learned to other neighbours. The #HederaHashgraph blockchain uses this protocol. 

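A toy sketch of gossip-style propagation (my own simplified model, not Hedera's actual hashgraph algorithm): each informed node forwards what it knows to a few random neighbours every round, so information spreads through the network exponentially fast without any central coordinator.

```python
# Toy gossip propagation: informed nodes tell FANOUT random peers each round.
import random

NODES = list(range(20))
FANOUT = 3  # neighbours contacted per round


def gossip(origin, rounds, seed=0):
    rng = random.Random(seed)
    informed = {origin}
    for _ in range(rounds):
        newly_informed = set()
        for node in informed:
            for peer in rng.sample(NODES, FANOUT):
                newly_informed.add(peer)
        informed |= newly_informed
    return informed


for r in range(1, 5):
    print(f"after round {r}: {len(gossip(origin=0, rounds=r))} of {len(NODES)} nodes know")
```
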
If you have read this far, you will have realized that there is an ongoing race among technologists, university academia and crypto venture backers to research the next ideal blockchain consensus protocol, one that is energy efficient and accomplishes scale, security and decentralisation.  

I will end this post by summarising that several variations of the "God Protocol" exist today, albeit at an early stage of usage, but as they say in philosophy, the search for the eternal truth always remains.

Saturday, June 26, 2021

The Phoenix Project. Key takeaways and musings.

 

To enable fast and predictable lead times in any value stream, there is usually a relentless focus on creating a smooth and even flow of work, using techniques such as small batch sizes, optimal inventory stored at each work centre, reduced work in process (WIP), and prevention of rework so that we don't pass defects to downstream work centres. This is the most used concept in any modern manufacturing system. 

 

The same principles and patterns that enable the fast flow of work in a manufacturing setup are equally applicable to the technology world. The only difference is that in the technology world the work is invisible (it lives as code, bits and bytes, and applications stored in computer systems). In DevOps, we typically define our technology value stream as the process required to convert business objectives/requirements into technology solutions deployed on production environments, enabling a service that delivers value to the customer.

 


 

Whether or not one agrees with the DevOps methodology, this book (The Phoenix Project by Gene Kim, Kevin Behr and George Spafford), written in the form of a story about a fictional company, provides a framework in a simple, generally understandable format. As you read, there are many moments that will make you pause, reflect, and connect the characters and situations to people at your own workplace. The key takeaway is the "Three Ways". The First Way is to increase the flow of work from left to right of the value stream, or from business requirements to Operations in the IT world. The Second Way is to generate consistent, fast feedback loops, amplify the feedback to help create quality at each step, and catch defects early in the value stream. The Third Way is to build a culture of shared objectives and continuous learning. The author develops the theme from the key takeaways of Eliyahu Goldratt's seminal work on the Theory of Constraints (TOC) and also generously refers to the Kanban system from the Toyota Production System (TPS). 

 

How do we manage Constraints?

1.     Identify the constraint. This is the key step. On the production floor, work components are visible; in the technology world, however, work is invisible, and hence WIP (work in process) and constraints will go unnoticed if you don't have a system of status notifications, review cadence, etc. 

2.     Exploit the constraint to maximise its effectiveness. Prioritise work at the constraint.

3.     Subordinate to the Constraint.  Explore workarounds, alternate workflows.

4.     Elevate the constraint. Ensure that everyone else supports and helps the constraint in a way that ensures it gets the optimal amount of work done. 

5.     Institutionalise learnings & Continue to look at these steps repeatedly. 

 

Here are a few observations to churn your mind and generate points of view. 

 

1.     Brent, the character who epitomises the constraint in the IT Operations workflow, is brilliant, smart and a subject matter expert in many domains. We can all relate to similar characters in our workplaces. In this fiction, Brent is helpful, always eager to assist without any ego or expectation of return. What if Brent were arrogant and pushed back, or worse, prioritised help requests based on a "what's in it for me" syndrome? That typically generates power centres in parallel to work centres. Think of ways to handle this type of constraint. Is he the constraint? What can be done?

2.      What if Brent were an information hoarder, out to brand himself the indispensable hero? Is his brilliance an asset, or has it become a liability to the optimal flow of work? 

3.     How do we scale Brent?

a.     Let Brent coach and guide other people instead of doing the work himself. This system of mentees, if you will, may help scale the "constrained work centre" and reduce the constraint. 

b.     Prioritise work requests to Brent so that only critical work gets assigned to him and nothing else interrupts.

4.     Do you find a DevOps flavour in the old Indian practice of "Jugaad"? It is mostly found in medium-scale enterprises with a lean workforce, by design more than by choice. The workforce takes on multiple roles simultaneously and adopts a very iterative process of development, deployment and operations.

5.     As a Leader, how would you feel being a constraint? What steps should a Leader take to scale?

 

Some more musings, using analogy to compare and contrast the "Three Ways" with the various communication protocols and systems that have evolved over time.  

If I apply the "Three Ways" principles to the evolution of the data communication industry, some fascinating thoughts emerge. In the early days of IBM SNA mainframe-to-terminal communication and the X.25/Frame Relay protocols, there were mechanisms of implicit and explicit flow-control signals at each step of the data flow. The end-to-end data flow was consistent and predictable, with negligible wastage of information blocks, although the data chunks were small compared to networks today. This is similar, in a way, to the manufacturing floor workflow from work centre to work centre using Kanban cards to signal, which ensured that no single work centre or hop became a constraint to the flow. Over time the IP networking protocols emerged, with different options for flow control: explicit signalling of congestion and end-to-end flow control (as in TCP window-size control between source and destination). A more implicit mechanism is often preferred, with dropping of packets to indicate to the sender that it should slow down (yes, you heard it right: dropping of data packets, which could be considered wastage in manufacturing terms). A whole school of thought and bodies of deep research emerged over the years, proposing options for regulating data flow across networks to reduce packet wastage, improve predictable latency, and self-heal upon detection of a constraint in any intermediate node. Many protocol white papers/RFCs were written. It is fascinating to think that this is, in concept, very similar to the studies done by the Toyota Production System and TOC, striving to create an optimal flow of components and products with consistent quality and reduced wastage on the manufacturing floor. It is amazing to see that concepts can originate in one industry domain yet still be applied across industries and setups; you just need to refine them for the target environment. 
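A tiny sketch of the implicit flow control described above, under simplifying assumptions (a fixed-capacity link and a TCP-like additive-increase / multiplicative-decrease rule; real congestion control is far richer): the sender keeps growing its window, its "WIP limit" of packets in flight, until drops appear, and then backs off.

```python
# Toy additive-increase / multiplicative-decrease window, with packet drops
# acting as the implicit "slow down" signal from the network.
LINK_CAPACITY = 8  # packets the path can carry per round-trip

def simulate(rounds=20):
    window = 1  # packets in flight per round-trip (the sender's WIP limit)
    for rtt in range(rounds):
        dropped = max(0, window - LINK_CAPACITY)  # overflow = wasted work
        if dropped:
            window = max(1, window // 2)          # implicit signal: back off sharply
        else:
            window += 1                           # no loss: probe for more capacity
        print(f"rtt {rtt:2d}: window={window:2d} dropped={dropped}")

simulate()
```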


The references in the book to manufacturing flow principles borrowed from TPS and TOC revived nostalgic memories from my early days as a production supervisor apprentice at Siemens, India. The manufacturing floor layout was optimised for the flow of the product (induction motors and switchgear control panels) as it got built from scratch to final assembly. The occasional pile-up of completed products at the final integration / systems-test QA work centre, sometimes causing the entire workflow to stop, is one of the recurring problems in manufacturing that has since been addressed by seminal bodies of work like TPS and TOC. The same principles are now being rehashed and refined to match the new environment of the technology value stream, and given a new name: DevOps principles. 

Monday, May 3, 2021

Is the CDMO wave the next déjà vu moment for India, like the 90's ODM wave in electronics product engineering services?




Let me start with a disclaimer: I am not an expert in Pharma or Biologics. I have, however, been keenly observing the CDMO (Contract Development and Manufacturing Organisation) movement taking shape in the country, pursued by leading Pharma and Biologics players. I would like to draw parallels with the ODM wave in electronics design and manufacturing in the early 80's and 90's. 

I was deeply involved with the ODM business model, which was flourishing by the 90's, when Taiwan emerged as the leader in the Original Design Manufacturing (ODM) industry (even in 2020 it still leads, with the largest share of global ODM and OEM electronics design and manufacturing exports). What struck me is that there are many similarities between the ODM and CDMO business models, and it would be good to be aware of them and of the lessons they hold. The erstwhile leading Indian IT services companies and the associated investors got totally blindsided and missed the ODM bus, as most of them were basking in the glory and growth of the headcount-led software/application coding and maintenance services model.

My stint at Tata Elxsi and Sasken in the early 2000s gave me an opportunity to observe the ODM success model closely through my many visits to Taiwan and associations with the small and large ODMs there, including participation in Computex, the annual global ecosystem event in Taipei. We did embark on the journey to emulate the ODM model here in India, but with limited success, and therein lie the lessons learnt. Brands like BenQ, Foxconn and Wistron are some of the well-known names, and all have their origins of success in the ODM model. The critical ingredients for success were all there in Taiwan, and coupled with the cultural and geographic proximity to China (in the 90's they were less politically apart), they all established OEM manufacturing bases in China; together, the leading ODM+OEM combo players still rule global electronics design and manufacturing supply.

Many constituents need to come together to form an ecosystem for the ODM model to succeed: integrated circuit makers (both analog and digital), multi-layer precision PCB fabricators, electronic component suppliers, LED display makers, electro-mechanical sub-assembly makers, electronic test automation tools and experts, and design experts. Most importantly, the proximity of the ecosystem enabled tight, collaborative working relationships, leading to agility and adaptability in responding to customer requirements. Obviously, India never had any major success in integrated circuit/microprocessor/memory manufacturing, high-density precision PCB fabrication, or design service providers who could quickly convert a design into a prototype. Hence, in wave 1 of the ODM model, we could never gain market share. Taiwan, China, Thailand and Vietnam are still ahead of us.

Just like the ODM ecosystem, Pharma CDMO is about custom synthesis for Pharma majors and Biologics startups who would prefer the entire lifecycle of molecule synthesis, integration, prototyping, trials and manufacturing-at-scale services to be delivered by a reliable, trusted CDMO partner. The main difference compared to the earlier ODM wave is that all the ecosystem constituents for the success of the CDMO business model are already present in good measure, and specialty chemicals manufacturing, including APIs, is being scaled up in the country. Further, geographic chemical industrial clusters allow easy collaborative working relationships among them. The good news is that India has all the above constituents, with world-class companies, cutting-edge manufacturing processes and economies of scale. Adding to this, we have an ecosystem of chemistry, pharma and biology scientists, all of which is coming together with aspirational managements with vision.

This makes me feel confident that in this CDMO wave, India will be far more successful than in the earlier ODM wave and will make a name for itself with significant market share. Huge opportunities exist for creative minds and investors alike.

Friday, December 28, 2012

Successful Technology Product Company in India – A Mirage. Root Causes from insightful Anecdotes.


Have you seen an Indian technology brand emerge in the last decade? Do you see an Indian ICT product brand emerging anytime soon? The factual answer is no, not yet. Whatever successful instances exist are restricted to dotcom or e-commerce portals. However, there is a ray of hope, as you do see glimpses of complete products or modules developed out of the India development centers of some multinationals, who have found the key to it.  

Several erudite commentators have written on this subject and provided deep insight into the root causes of the lack of innovation in India. However, I would like to pen my thoughts and bring a different perspective to it, gained by experiencing several (successful and failed) innovation initiatives in my career across big, small, Indian and multinational ICT companies, as well as entrepreneurial stints. With no intention of doing death-by-analysis or prescribing change recipes, I just want to bubble up specific deep-rooted issues through a series of anecdotes and leave readers with food for thought.

Having been part of the emergence of the Indian ICT industry since the late eighties, and having participated in several transformation initiatives over the last two decades, I have been lucky to come across several insightful situations. I will share some of these anecdotes to bubble up the root-cause issues.

Focus on Education Grades instead of application of learning:
During my tenure at the India Development Center of Motorola in the late 90's, after a stint in solution architecting and sales, I took on a role in technology business development. The role was to provide foresight on technical disruptions on the horizon and seed micro business plans, so that we would be ready when those disruptions happened. VoIP technology was one such disruption and had begun to move from the lab to mainstream solutions. Several technology standards were being developed to propel this evolution, namely H.323, SIP, MGCP, etc. As part of a team working on the plan, my job was to explore applications of these standards inside and outside the company. We managed to bring several potential opportunities to the table, but often ended up playing catch-up with the rapidly evolving industry needs and failed to meet the deadlines demanded. In one of the program reviews, one of the senior leaders commented, "It's the choice of talent we hire. They are often rank holders with immaculate academic track records and would not like to risk their career records with failures." Typically, implementing a technology standard is straightforward, but applying it to build applications carries a certain ambiguity, and the ability to handle ambiguity is essential. The right talent mix matters, especially for projects with ambiguity.

Entrepreneurship & Innovation – Do they go together?
The answer is a resounding NO. You don't have to innovate to be an entrepreneur. India has been the land of traders, be it before the East India Company days, during, or after. Starting a new trading outfit is entrepreneurial, no doubt, but the innovation is limited to business-model innovation, if any. There are several large trading houses which over a period of time have evolved into product companies; the Tatas with Titan, the Indica and the Nano are benchmarks, but we can't think of many names in the ICT industry. Trading, according to me, is a play on cost arbitrage, be it in iron ore, raw silk, agricultural produce, or ICT talent. The ICT industry has been caught up in and held hostage to its own success: a decent-margin (ROE/ROC), low-risk business model. Taking a cue from the success of IT services pioneer Infosys, several entrepreneurs set up shop to do more of the same. They hesitated to invest their capital in product innovation even if they wanted to, because it is deemed too risky by their investors, who love predictable, low-risk revenue streams. Let me give two examples here.

One of the most admirable role models in the 90's for all made-in-India-product minded freaks was Rajiv Mody. He set up a technology startup, later renamed Sasken, with the motive of building solutions for the mobile and consumer industry and creating global licensing revenue streams. I happened to bump into him during my many visits to Taiwan's Computex, which is like the CES of the East. I was lucky to get some face-to-face time with my "role model" and hear his views as we discussed the various challenges of product creation, licensing, non-linear revenue models, lumpy revenue streams, etc. I vividly remember the comment he made thereafter: "How will we explain these challenges of product development to the investors?" I was so enamoured that I did work for him for a while. Nowadays, I understand that after the listing of the company on the bourses and the incessant peer pressure unleashed by the stakeholder community, there was no option left but to shift towards a low-risk, services-only model.

Another, more recent example, in 2010-11, was MindTree's acquisition and subsequent hiving off of the Kyocera mobile phone development unit. The Kyocera unit in Bangalore had a concentration of highly talented and passionate designers and was the closest example of the ODM (Original Design Manufacturing) model. The ODM model was invented in Taiwan (or at least was polished to an art form there, and created a platform for the success of Taiwanese industry). Several seasoned designers, after successful stints at an MNC, leave and set up a small design house by forming a team of like-minded designers with complementary skills. The proximity to manufacturing units and a vibrant semiconductor industry provided all the ingredients to succeed. The design houses create designs of consumer products and license them to big companies to brand, manufacture and sell, for a fixed fee plus a variable per-unit fee. MindTree (like Sasken) decided to pursue the ODM path, but soon realized that one needs deep pockets and years of patience to be able to create a blockbuster success. The quarterly profit margins started to drop, and investors started to point fingers at the management. The management had to reverse its decision and drop the risky model. It instead chose to redeploy the high-calibre and passionate talent towards the services business. Several analysts talked highly of the management's "turnaround" capabilities. My way of looking at it was that one more innovation initiative was nipped and the country may have lost an opportunity, but the entrepreneur gained and the share price zoomed.

Focus on the business of Value Addition
There is nothing wrong with the IT services model. It is a proven model, has helped uplift a sizeable population in India, and may actually be well suited to India given our demographics. IT services is all about acquiring raw talent, polishing it and deploying it to deliver outcomes. In the services industry's time-and-material business model, the client lays out the outcomes and carries the risk. I remember an incident during my days at Tata Elxsi, where I was running an initiative to create a hybrid revenue model of component licensing and customization services. The annual post-AGM dinner was customary, and it gave us managers (accompanied by family) an occasion to mix with board members and learn from their wisdom. F. C. Kohli (the father of the Indian IT services industry) was the chairman of the board, and I was lucky to get an occasion to interact with him. We had this grudge that to hire high-calibre talent we had to pay at the higher end of the industry benchmark, and we asked him his opinion on it. His response was enlightening: we are in the business of value addition. The investors look at how efficiently we hire raw talent, polish it, and the revenue earned by deploying it; the difference between the cost of acquisition and of deployment is the economic value addition of our business. We are not in the business of showcasing top talent or showcasing top-line revenue. I was lucky to have more occasions to interact with him and also learn from him about the ethos of product innovation. The question to ponder was what would make a product innovation business model more compelling than a risk-free services model. One way to create compelling value and lower risk is to create a product solution for the Indian market, which offers easy access to customers and is large enough to justify any business plan.

A little bit of a nudge from the establishment
CDOT was a great example of how talent, when brought together with leadership and purpose, can create wonders. The CDOT team had extremely talented engineers, bonded even more by a sense of purpose instilled by Sam Pitroda's leadership: to create a rural electronic exchange which could withstand the vagaries of power and heat. Alas, after great initial success it could not be sustained. Compare this with China, which has institutions funded by the government to create new standards and solutions for the country. The government even leads tough negotiations with foreign suppliers to persuade (rather, mandate) them to adhere to the local standards, thereby giving a push to local innovations. I must relate another example here to show how government bodies can support sustained local innovation. During a visit to Taipei, we were pleasantly surprised to get an invite from the local trade body (probably they have an arrangement with the hotel to send invites to business visitors). Being curious, we accepted and became their guests for half a day. One of the representatives was assigned to escort and guide us. She took us to the trade representative office and helped us discover the array of electronics, components and solutions made by local vendors. The whole experience left us in awe of how a well-planned effort can support local industry and build a brand for the country.

Lack of an Ecosystem
It's not about the lack of funding or early-stage investors. As a matter of fact, there are hundreds of early-stage investors in the country. It's a different matter that most of them may not bring the deep knowledge of the area you need, but that's a different topic to discuss. The ecosystem I am referring to here must consist of hardware manufacturing, semiconductor design, supply chain infrastructure and easy access to markets. 

Perhaps an example will relate better. In the early 2000s, post the dotcom bubble, the only industry that survived without too much of a dent was the consumer electronics industry. Media consumption was evolving, and all types of audio/video players were being created at one end of the spectrum. I was working on a business plan for creating ODM designs of A/V players, including the hardware design and encoder/decoder software. One of the challenges we came across was the lack of a vibrant high-tech manufacturing ecosystem: prototyping a multilayer (>10-layer) miniature PCB design was a challenge, whereas the same thing in Taiwan is akin to a household industry. Timely component sourcing was a challenge due to an inefficient supply chain. A go-to-market strategy for any ODM is to create near-final prototypes and display them on global platforms like CES, Computex and the like. Our inability to create timely prototypes cost us many opportunities (and added to the business plan's time and cost factors). I am sure you will agree that we have not seen much change in the manufacturing industry and the challenges still exist.

Working in Silos
Solution development is all about getting multiple technology pieces to integrate together. Alas, this is easier said than done. Most services organizations in India are structured, or siloed, based on technology. The belief is that same-technology folks bond better together, but the fact is that it is just easier for us managers to manage. An anecdote will exemplify the challenges a siloed structure places on solution development.

Around 2002, we bid for and won a global RFP to design an HD video endpoint completely from scratch for a US-based product startup (name withheld for privacy). This was the first time a complete product design, and not just individual components, was outsourced, and it was a high point for us to win it. Apart from the CTO and a small design team to guide us, all the rest was to be done by the winning bidder. The components involved were mind-boggling: chipset choice, hardware, FPGA, embedded OS, TCP stack and encoders/decoders. I was playing point during the bidding stage, getting the individual technology teams to work together as a single virtual team. One behaviour I noticed: as we iterated and came closer to being declared the winner, the virtual team bonded more and more. Winning binds a team together. But alas, the opposite is also true and poses the stiffest challenge to success. The task at hand was humongous, and every design iteration brought a new set of learnings. Deadlines began to slip as all the estimation logic went awry due to so many assumed variables. In the end, after much effort overflow, the team did manage to bring out a decent prototype. The learning was that virtual teams work, but a better approach would be for the team members to work, for the duration of a critical multi-technology project, under a single leadership command with shared goals and a sense of purpose.

Disjointed Globalization:  Product Management & Delivery organization

One of the main ingredients for the success of a product or innovation is the role of the Product Manager. The Product Manager is the owner of the product and defines the product/solution, the key success factors, the target user audience, the solution differentiators and the go-to-market strategy. The way the Indian services model has evolved, it has focused far more on the process and maturity of the Delivery role, leaving the other functional roles of product development less developed. We don't hear of Product Managers as much as Delivery Managers. Conventionally, the product management function has never been outsourced; it is retained by the client organizations or the HQs of MNCs. Another fallout is that the India delivery organization is not in sync with the ethos of the product manager and hence is not able to bring in the passion and flexibility to adapt to changing market conditions, although with collaboration best practices this divide is being bridged. To me, the lack of a mature product management function is one of the key impediments to innovation success. One way to solve this is to encourage talent movement across functions: product development, testing, solutions architecture, business development, and sales and marketing. Cross-functional experience is a must for the successful grooming of product management, which in turn is a crucial factor for innovation to succeed.

Tuesday, January 18, 2011

CLOUD2.0 Vision

Assembling the CLOUD Puzzle pieces
CLOUD solution architecture has evolved over the last couple of years and today is ready for mass adoption by enterprises. Essentially, virtualization is the key disruptive technology which enables the CLOUD as we know it today. VMware and Citrix, followed by others, pioneered the concept of the Virtual Machine (VM) and played a key role in its de-facto standardization, bringing it to this stage of enterprise adoption. A VM is a portable block of data, software and register state which can be planted on a given compute platform almost instantaneously. Thus compute capacity can be increased or decreased at will by increasing or decreasing the number of VMs. Another unique characteristic of the VM is that it can be migrated across physical compute platforms within the same rack, across neighboring racks, or across geographies. This virtue of VM migratability provides flexibility and elasticity to the CLOUD compute platform. VM migration also creates new challenges for the network I/O underneath: it should be able to support a fast, low-latency, high-bandwidth interconnect between VMs, and between VMs and their data.

Virtualization and Network I/O challenges
Not all enterprise applications have the same compute and I/O requirements. Enterprise applications vary and exhibit unique characteristics in terms of compute and data-movement requirements. As an example, a CLOUD search engine will run a compute-intensive search algorithm across many simultaneous VM instances (in most cases spread across geographies), each of them sorting data. A CLOUD video services application will need the VMs to move large amounts of data at real-time rates from storage to each subscriber. Data movement between VMs and storage is a significant factor to consider, so the choice of the network fabric underneath becomes significant. A highly optimized switching fabric is needed to interconnect VM<->VM, VM<->storage and VM<->users, and the underlying network must also provide end-to-end QoS on the packet data path. A highly optimized, fast, low-latency network I/O fabric is another key puzzle piece.

Storage in the CLOUD
Storage is an architecture for moving blocks of data or files between the compute platform and remote disks via specialized interconnect links. SAN and NAS are the two predominant architectures, both of which are protocols designed to operate over optimized network I/O. SCSI was the earliest interconnect standard, and it has since evolved to support faster and fatter I/O through SCSI-over-Fibre-Channel or SCSI-over-Ethernet; Fibre Channel and Ethernet are the physical layers onto which the SCSI protocols are mapped. Ethernet is rapidly evolving from 1Gig to 10Gig and beyond, with several vendors and standards bodies behind it addressing its shortcomings, so it is not surprising that Ethernet has historically proved to be the most preferred standards-based network I/O. In summary, the storage interconnect is another part of the CLOUD solution puzzle.

CLOUD 1.0
The above pieces of the puzzle have been around for a while, are well researched, and have several vendors providing robust solutions for each part. CLOUD solution providers have integrated these individual solutions to form a CLOUD-based application delivery model, which can be called the CLOUD 1.0 model. The focus of CLOUD 1.0 was on consolidation of resources to reduce computing, storage and power consumption costs. The model had early adopters among enterprises serving applications largely limited to the mobile workforce, but it did not find mass adoption for delivering applications to the workforce within the enterprise walls. Several concerns and issues remained in the minds of CIOs and network administrators, impeding widespread adoption.

Vision of the CLOUD 2.0 Model
Several new approaches have emerged to enhance the CLOUD model and make it more acceptable for mass adoption. Several solution providers have built solutions targeted at the concerns of the CIOs and, in the process, created more pieces of the puzzle which, on integration, enable the evolution to the next-generation CLOUD solution model, referred to here as CLOUD 2.0. A rough illustration of the CLOUD 2.0 model is shown here:-



Before going into the individual pieces of the puzzle, let's take a look at the advantages the CLOUD 2.0 model delivers, and how it thus positions itself for widespread adoption to deliver services to the workforce inside the enterprise walls.
  • Enterprise grade High Speed Wireless Access Network, Controlled and Managed from the cloud. Further lowers the TCO of the network
  • Traditional Security services to secure popular applications like Email, WEB2.0 applications, Data Loss prevention tools – controlled and managed from Cloud. Mitigates CIO concerns and further lowers TCO.
  • Conventional WAN Acceleration solutions required active management by onsite Network Administrators. WAN acceleration services are needed on links between the Enterprise site and the Cloud. Cloud deployed WAN acceleration solutions with onsite client solutions are becoming available, which becomes one more part of the Puzzle. This further removes complexity of Network optimization management and further reduces Network TCO.
  • Several Cloud applications are becoming available from hosted application vendors. Most of these applications would need to be secured individually as simple Network security services are not enough. Data Loss Prevention (DLP) and corporate business policies have to be monitored for compliance on all traffic to/from the Enterprise and the Cloud. Application Security and DLP solutions deployed and managed from the Cloud are becoming available, which becomes one more part of the Puzzle. This further reduces the complexity and TCO for the CIO and Network Administrator.

CLOUD 2.0 Enterprise
An enterprise adopting the CLOUD 2.0 model is illustrated below. It is important to note some significant changes which have made this movement possible. Let's take a look at them.



CLOUD based Managed Secure “Wireless Network-on-Tap” Service
A typical enterprise today has wired switching at each site as the primary network access medium. Wireless access is fast becoming the preferred network access medium. The 802.11n WLAN standard makes a big difference, as it provides around 200 Mbps of data throughput, and coming standards promise to ramp this up still higher. The technology of hosting the wireless network control function on VMs is a significant game changer. The enterprise sites have Radio Nodes (RNs) installed on ceilings to provide wireless coverage and capacity, and a minimal Ethernet infrastructure exists on-site to connect the radio nodes. The configuration, control and management of the radio nodes is done by the hosted wireless controller in the CLOUD. The enterprise network administrator has access to configure the wireless network controller and apply the appropriate network policies, user policies, QoS policies, filters, etc. The hosted wireless LAN controller approach enables the collapse of the on-site network infrastructure in the enterprise, without sacrificing mobility, bandwidth or security, and reduces the cost of delivering network access.
With respect to securing the Wireless Network infrastructure, standards have evolved making the WLAN access as secure as the wired network access, and coupled with specialized Wireless Intrusion Detection Systems can detect and also prevent intrusions in real-time.
Several leading enterprises have begun to adopt the Wireless LAN as a primary network access medium.

The rise of Tablets and Notebooks
Gone are the days when mobility applications were only meant for workers on the move, inside and outside the enterprise premises. Today, mobility has become an integral need for the entire workforce, as always-on connectivity is the new paradigm. Android and Apple tablets are bringing in volumes which are driving investment by all vendors to design newer, lighter, faster, longer-battery-life tablets. In the market share game, desktops gave way to laptops, laptops to notebooks, and now notebooks to tablets. A substantial percentage of the enterprise workforce will, in the near future, carry handheld computers of some kind rather than tethered desktops. An enterprise centered around always-connected mobile handhelds, always connected over the wireless infrastructure, is another significant and disruptive game-changing technology.

Enterprise Security services on the tap from CLOUD
Traditional means of securing the enterprise meant deploying a perimeter firewall, anti-virus, spam filters, web URL filters, NAC systems and data loss prevention systems. A new-generation company, ZScaler, has taken the hassle out of installing so many appliances by providing security services on tap from cloud-deployed servers. It acts like a giant proxy in the cloud, and all the enterprise network administrator has to do is point the browsers to the cloud proxy. It secures email content, blocks spam, detects WEB2.0 attacks, and more. Hosted enterprise security services are going to play a significant role in mitigating the concerns of CIOs and network administrators about moving towards the CLOUD 2.0 architecture.

Delivering on the CLOUD promise
The coming together of the disruptive technologies described above, namely the "Secure Wireless Network on Tap", "Enterprise Handheld Computing Platforms", "Enterprise Security Services on Tap" and "WAN Acceleration Hosted Services", promises a new generation of CLOUD, referred to here as CLOUD 2.0. It presents an opportunity for enterprises to move towards all-CLOUD-deployed enterprise information services, which helps in reducing TCO, improving power consumption budgets, lowering the enterprise carbon footprint, and enhancing application availability for the entire enterprise.


In the next part of the series we will look at some ideas on how CLOUD 2.0 can be architected and deployed in a Hybrid Cloud model to further accelerate adoption by SMBs and enterprises.

Tuesday, December 21, 2010

Hybrid Cloud Solutions, a catalyst for rapid adoption by SMB's


Most cloud computing adoption surveys from different consulting companies point towards the fact that the adoption rate of cloud solutions by SMBs can be drastically improved. Some survey examples are:-

SMB Cloud Computing Adoption 2010, by Spiceworks: http://www.spiceworks.com/it-research/cloud-computing-adoption-2010/cloud-computing-security-concerns/

BT’s Enterprise Intelligence survey: http://www.computerweekly.com/Articles/2010/01/04/239799/CIOs-confused-about-cloud-computing-survey-reveals.htm

If you collate the reasons for non-adoption in the near term from all the surveys, it presents some deep insights into the behavioral psychology of CIOs and IT administrators. The most common recurring reasons are:-

( a.) “….lack of control of information and assets…”,

(b.) “..unproven technology..” or in other words the solutions available are too confusing at the moment, and

(c.) “..Security concerns..”.

Let's take a deeper look at these concerns as voiced and ideate some solution options.

On deeper thought, the most common reasons stated above are not surprising, as they are actually interconnected in some form, like cause and effect. The lack of understanding of the cloud delivery model is leading to a perception that somebody else is taking over the controls. Here, cloud service providers and equipment providers are also to blame to some degree, for not being able to educate and spread the right awareness of the cloud delivery model. The most common piece of misinformation the IT administrator has about the cloud delivery model is that the IT assets he has deployed today would be moved or re-deployed with virtualization at the cloud data centers and managed by the third-party cloud services provider. This misinformation is the root cause of the apprehensions. The IT manager starts assuming that the lack of assets to manage will make him redundant, and that his shrinking team means shrinking influence on the network and application architecture. This feeling of insecurity underlies most respondents' answers to survey queries on cloud adoption. Surely the undoubtedly disruptive cloud technology and delivery model should not be held ransom to impediments like these. So then, what is the way to make adoption faster?

Hybrid Cloud model is the answer. Hybrid cloud can be understood as a mix of public & private cloud or internal and external cloud. It is a combined cloud environment consisting of multiple internal and/or external providers.

The private cloud, or internal cloud, layer is an essential ingredient for IT managers and CIOs to retain control, or the perception of control, of their enterprise assets while still enjoying all the traditional advantages of cloud technology. The primary objective of a private cloud layer is to provide deep enough controls to the erstwhile IT manager and CIO. Cloud solution providers have to build a modular layer around the solution which is perceived to reside at the enterprise site and whose controls remain with the IT manager. For lack of a better term, let's call this module the "internal resources layer". Most cloud solution vendors today mainly focus on lowering TCO through virtualization of computing, storage, multiple application instances, the SaaS business model, etc. This is an absolutely essential ingredient of the solution, but this unwavering focus blinds them to the significance of the "internal resources layer". The internal resources layer could become the most important layer, as it is the interface through which IT managers and CIOs will feel they have something near to them, which they can see and feel, and through which they can control, monitor and manage their enterprise solutions. Product managers should smartly define the packaging and release engineering, which can use a mix of public and private, or internal and external, resources.

Product managers of cloud solutions have to give deep thought to articulating the "internal resources module" in such a way that it almost replicates the kind of dashboard the IT managers are used to seeing on a day-to-day basis and are comfortable with. The internal resources module is more than just a web-based remote dashboard with configurable parameters. It should simulate or model a set of controls for data flow, policies, the user database and application management, as close to the legacy solutions as possible. The control module should mimic the existing infrastructure, interpret the commands, and pass them on as appropriate to the underlying cloud infrastructure solution. This layer could be smartly packaged as an onsite module with a mix of physical hardware and software. A properly designed and articulated solution will help remove the apprehensions of the IT manager and CIO in deciding to move to the cloud delivery model.

Additionally, even in positioning and marketing communication, the product manager has to articulate a Hybrid Cloud model story, to make it more amenable for adoption by SMBs.

I will discuss more on how, and in what form, the internal cloud resources can take shape in the next article.

Monday, October 4, 2010

Three Must-read, Life changing books on Dharma and Heaven



"The Difficulty of Being Good" – by Gurcharan Das
While most successful retirees fancy surfing the sands of an exotic locale, Gurcharan Das decided to study the Mahabharata and write "The Difficulty of Being Good", a contemporary analysis of the epic. Although there are a number of books on this epic, this one is different and refreshing, and a must-read for the current generation from all walks of life, including corporate citizens and businessmen. The author reads through each chapter of the epic and, in his unique style, leverages real-world analogies to explain the essence in simple English. The central theme is the description and importance of following the Middle Path in this world, which is intricately mixed with right and wrong in a bewildering manner. To explain this central theme, the author explores Yudhishthira's dilemma and his transformation from extreme idealism to the pragmatic middle path as the epic unfolds. The author combines his years of experience understanding India at the grass-roots level to compare and juxtapose with events in the epic, repeatedly drawing out the key messages and the meaning of "Dharma" throughout.

Just being good does not bring about happiness, nor does it reserve a seat in Heaven; but then why does the epic exhort followers to be good and follow the path of Dharma? The book provides the answers. It's fascinating to read the different interpretations of Dharma espoused by the main characters: Bhishma, Vidura, Yudhishthira and Krishna. Bhishma's post-war advice to the remorseful Yudhishthira is captured brilliantly to convey the meaning of Dharma, which is absolutely contemporary and relevant to today's real world.

If I get a chance to meet the author, I would love to understand his views on the Middle Path India should follow in its engagement with China, and how he views China in comparison with Duryodhana, in the context of one more analogy in the book.

"Five People you meet in Heaven" by Mitch Albom
The book "The Five People You Meet in Heaven" by Mitch Albom fascinated me; it is not only enlightening but makes you reflect on your life and live each moment pondering that you are answerable for it in your afterlife. The lead character dies and meets five people on his post-death journey, five people who crossed paths with him in his real life. I particularly liked the part where he meets his father and develops a bond which, even though he yearned for it, he could not accomplish in his real life. Think of the five meetings :) as five gates you have to cross, or five examinations you have to pass, to gain entry into Heaven. Somewhere in the middle, I had this gut feeling that this is a perfect script for a family Hindi movie, with all the ingredients of a blockbuster. At various points the book leads you to reflect, and during one such moment I thought of another book closely related to this topic: "Man's Search for Meaning" by Viktor Frankl, the real-life story of a prisoner in a Nazi concentration camp and how he managed to survive only because he had a "purpose" or "meaning" to live for.


All three books, based respectively on legend, fiction and a real-life story, are deeply moving and have helped me shape my thoughts. I would strongly recommend them to all, to understand the importance of the Middle Path and of having a purpose, vision and mission in our lives.