Deepak's Technology Blog
Aug 2019
A gravity-fuelled ride through 900 metres of banked corners, tunnels and dippers.
Aug 2019
Ride through the narrow Shotover Canyons at over 85kph. Churning rapids and towering cliffs make the Shotover River a thrilling experience by jet boat, and this high-speed tour gets you spinning, fishtailing, and skimming across the surface.
This is an essay that I worked on for my 5th grade school assignment:
Think about the Earth in fifty years, or even one hundred, a fireball of a planet all because of climate change. It is our biggest challenge. Climate change will destroy our planet Earth, and we may be at the point of no return.
The primary cause of climate change is greenhouse gases that come from various sources like industry, vehicles, and agriculture. Animals like cows and sheep release large amounts of methane gas, which in turn negatively affects our climate.
Believe it or not, our planet is definitely changing, and it might not be for the better. In my opinion, climate change is ruining our planet. Let us consider the fact that our way of life is changing, and future generations will feel this ripple. That recycled water bottle a day is not really helping, and what we need is a radical plan. This should include reducing our current emissions, removing accumulated stock of CO2 from the past, and changing our way of life going forward.
When it comes to climate change, it has a wide array of effects and consequences. Incidents of extreme weather have increased, such as hurricanes, tornadoes, heat waves, extreme cold polar vortexes, and droughts. Firstly, scientists' climate models predict that extreme weather will become more frequent. In fact, NASA research shows that since 1950, the U.S. has received increasing numbers of high rainfall events. In particular, average U.S. precipitation has increased since 1900, and some areas have had bigger increases than the national average. Likewise, hurricanes have increased in frequency, intensity and duration since the late 1980s. Additionally, extreme droughts and heat will become more common, occurring every 2 to 3 years instead of once every 20 years. Thus, climate change affects our weather patterns and makes extreme weather more common.
One of the most prominent effects of climate change is sea level rise. Heed the following warnings: According to NASA, global sea level has risen 8 inches in the past century. This has also been confirmed by NOAA (the National Oceanic and Atmospheric Administration). Specifically, in 2014, global sea level was 2.6 inches above the 1993 average. This is dangerous because flooding has become 300 to 900 percent more frequent in our coastal communities than it was 50 years ago. The yearly numbers tell their own story; every year, the sea level rises ⅛ of an inch, and that rate can very well increase. Additionally, as our seas warm, sea level can rise even more. We can conclude that global sea level is on the rise and is climbing faster each year.
Everybody wastes gas or fuel, maybe by leaving the heat on for more than an hour, or by leaving their car idling. The point is, our world is getting warmer and it is melting our ice, anywhere from the North Pole to the South Pole, and even Greenland. For instance, both National Geographic and NOAA weather stations and reports have shown that 2016 temperatures were 1.69 degrees above the twentieth-century average, making 2016 one of the hottest years on record. In particular, NASA has also concluded that Greenland lost 286 billion tons of ice per year from 1993 to 2016. Specifically, they also show that Antarctica lost 127 billion tons of ice per year over the same period. To put all of this into perspective, Earth's average surface temperature has risen about 1.62 degrees since the late 19th century. These numbers may seem small, but they have long-lasting effects which wither our planet even more. Our Earth is getting blasted with heat by the very gases we keep putting into the air.
As we burn fuel and produce carbon dioxide, we all know our atmosphere is suffering. Lastly, our oceans are acidifying and suffering too. In the first place, according to PMEL, every year the ocean absorbs a quarter of the CO2 we release into the atmosphere. Put simply, the more gas and fuel we put into the air, the more we put into our waters. Additionally, since the beginning of the Industrial Revolution, our oceans' pH (acidity) levels have fallen by 0.1, which means a 30% increase in acidity. At this rate, by the end of the century, ocean acidity will have increased by 150%. We depend on our fish and sea life for food, and ocean acidification affects both. In fact, ocean acidification has significantly reduced the ability of coral reefs to build their skeletons. It also brings the risk of compromising fertilization and the survival of other species. There have been failures in oyster development all along the West Coast, caused by ocean acidification. Our coral reefs, like the Great Barrier Reef in Australia, are disappearing because of ocean acidification.
We can all see that our planet is suffering and getting ravaged by mankind's own hand, so we should work together to stop it. Not just for our future, but for the future of the Earth too. This is my generation's call to action, and I believe we can stop this if we put our minds to it.
-Shaurya
This was my submission to the NY Times essay contest last week…
The past 50 years have been marked by a dramatic increase in population and productivity in developing nations around the globe. Here’s the catch: increased productivity has not translated to sustained economic development and a decrease in poverty. Why? Countries with corruption and poor legal infrastructures have not been able to keep track of assets and transactions, leading to the development of what Peruvian economist Hernando de Soto calls the “extralegal economy”. Simply put, the extralegal economy is all the assets that exist outside the sphere of government regulation.
Why is the extralegal economy bad? Extralegal assets do not produce tax revenues and cannot be leveraged for official transactions, such as a loan or a mortgage. If you own a piece of land that isn't registered with the government, you cannot use it to gain credit or conduct more legitimate business. This is one of the key inhibitors of economic growth. According to de Soto, "…the majority of entrepreneurs are stuck in poverty, where their assets – adding up to more than US$10 trillion worldwide – languish as dead capital in the shadows of the law." That is $10 trillion that could supercharge the world economy given the right framework for asset registration. That same $10 trillion could also generate tax revenue for infrastructure, further accelerating economic growth.
Legalizing the extralegal economy is no easy task. Imagine trying to keep track of millions of new assets and transactions in a secure manner. It is seemingly impossible. But that is starting to change. Blockchain, the mystical buzzword talked about by many in Silicon Valley, is a possible solution to this problem. A blockchain is nothing but a distributed, immutable database. Once a transaction is recorded on a blockchain, it cannot be changed. These transactions are recorded for all participants in the blockchain using a shared ledger, so each participant knows the complete list of assets and transactions. This is perfect for securing asset registration on a large scale, as blockchain uses cryptographic functions to prevent tampering of the ledger. In fact, as outlined by Adrianne Jeffries in the NY Times, it is already being looked into by US government agencies as a way to store and protect records. Blockchain technologies are a possible remedy to the economic woes of the developing world.
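To make the immutability idea concrete, here is a minimal, purely illustrative sketch of a hash-chained ledger in Python. The asset names and fields are hypothetical, and a real blockchain adds distribution and consensus on top of this core structure:

```python
import hashlib
import json
import time

def record_hash(record: dict) -> str:
    """Deterministic SHA-256 hash of a ledger record."""
    return hashlib.sha256(json.dumps(record, sort_keys=True).encode()).hexdigest()

class Ledger:
    """Toy append-only ledger: each entry embeds the hash of the previous one,
    so altering any past entry invalidates every hash that follows it."""

    def __init__(self):
        self.chain = [{"index": 0, "prev_hash": "0" * 64,
                       "timestamp": time.time(), "asset": "GENESIS", "owner": None}]

    def register_asset(self, asset: str, owner: str) -> dict:
        entry = {
            "index": len(self.chain),
            "prev_hash": record_hash(self.chain[-1]),
            "timestamp": time.time(),
            "asset": asset,
            "owner": owner,
        }
        self.chain.append(entry)
        return entry

    def is_valid(self) -> bool:
        """Verify that no historical entry has been tampered with."""
        return all(self.chain[i]["prev_hash"] == record_hash(self.chain[i - 1])
                   for i in range(1, len(self.chain)))

ledger = Ledger()
ledger.register_asset("Parcel 42, Lima", "Maria Q.")   # hypothetical registrations
ledger.register_asset("Parcel 43, Lima", "Jose R.")
print(ledger.is_valid())                    # True
ledger.chain[1]["owner"] = "Someone else"   # tampering with history
print(ledger.is_valid())                    # False: downstream hashes no longer match
```

Because each entry embeds the hash of the one before it, quietly rewriting an old registration breaks every link recorded after it, which is exactly the property that makes large-scale asset registration tamper-evident.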
Freeing vast amounts of dead capital will require significant institutional overhaul. This issue lies at an intersection of government, business, and technology challenges which will be difficult to navigate. However, I believe that blockchain, combined with efficient governance, provides the best way forward in truly unleashing the economic power of these countries.
-Sid
5/3/19
This is pretty disturbing – the Barium group is able to hack into supply chains with ease and infect a very large number of users.
4/11/19
While I understand why we need humans to weed through pictures and tag objects for training image recognition algorithms, this latest from Amazon may be going too far for what I assume are NLP algos. Folks need to be concerned about privacy and ownership of data before jumping all in on more intelligent algorithms. No wonder I will never agree to a voice assistant at home (even if offered for free) unless it's built by me and all its data is under my control.
4/5/19
Good read on EverCrypt, which comes with mathematical proof that its cryptographic tools are free of coding errors, buffer overflows and certain side-channel attacks.
2/26/19
Another report of Supermicro hardware weaknesses leading to compromised bare metal servers between leases for cloud customers.
2/25/2019
Encouraging story on an Accenture initiative to use blockchain to maintain complete supply chain transparency, all the way to the end consumer, on how sustainably and ethically a product is produced – allowing us to reward behaviors we support, and punish those we want to discourage, with our wallets.
2/23/2019
Absolutely heartbreaking CNN story of what we do to people in the name of capitalism. What I do not understand is: how does approval for Firdapse allow Catalyst Pharmaceuticals to stop the production of 3,4-DAP by Jacobus Pharmaceutical for existing patients? Catalyst is dumping the existing users of the older drug onto charities, endowments and other mechanisms to bear its new list price of $375,000/year – how can our legal frameworks even allow that and send someone into a slow, inevitable decline and death? What if, for some macroeconomic or socio-political reason, there weren't enough dollars in charities and foundations to support the LEMS community? Why does a capitalistic company get to use charity to support its business practices? Have we lost our humanity?
2/13/2019
Martin Wolf on cryptocurrencies in the FT
Sounds promising: a new study from MIT suggests we could, after all, extract CO2 from the atmosphere in a cost-effective way.
2/12/2019
A colleague pointed out a couple of very interesting articles regarding Trust, well worth a read:
Reflections on Trusting Trust: Ken Thompson from August 1984
Schneier on Blockchain and Trust
2/4/2019
New threat to personal information from the Iranian cyber espionage group APT39 from the FireEye security threat blog.
2/2/2019
Talk about managing key-person risk – crypto exchange QuadrigaCX is unable to pay back $190MM in client holdings because its founder, Gerald Cotten, who held the only keys to its cold storage, died unexpectedly while visiting India. For nascent industries like crypto exchanges, something like this massively erodes trust, and that trust will be very hard for the other players in the space to rebuild.
1/29/2019
Very interesting article from Kashmir Hill at Gizmodo, who tried to cut the big 5 tech giants (Amazon, Facebook, Google, Microsoft and Apple) out of her and her family's lives, with interesting results. It goes to show the monopoly these tech giants enjoy over services critical to our daily lives – whether it be using maps to navigate, consuming news, or listening to streaming content.
12/1/2018
Edward Snowden explains Blockchain
11/28/2018
Came across a very interesting blog post (The Internet Needs More Friction) today – a reminder that a lack of speed, or deliberately adding friction, can be a winning strategy in itself. Remember Brad Katsuyama's Thor tool at RBC, described in Michael Lewis's Flash Boys, built to prevent HFT systems from scalping large orders.
11/26/2018
Three really big pieces of news today –
11/23/2018
US climate change impact report paints a dire picture
A technology inflection point is where users move from an existing technology to a new technology. It takes a leap of faith for early adopters to make this transition and as the number of users adopting this new technology increases, there is a tipping point at which the new technology completely overwhelms the old one. A trust bridge is a mechanism that makes it easier for early users to adopt new technology. Understanding the underpinnings of a trust bridge will help you navigate an inflection point and drive adoption of the new technology.
The automobile moving from gasoline power to electric power is an example of one such transition (Case A on the framework mindmap). We take it for granted that we will not run out of fuel and be stranded with gasoline based cars, because of the numerous gas stations all over; the same is not true for electric cars which have a limited range and electric charging stations are not as common as gas stations. But the availability of accurate maps with locations for these electric charging stations, the accurate determination of how much charge is left in the car and the confidence that we can make it to the next charging station before the battery runs out, helps us make this transition (this is the trust bridge).
Ownership of cars is also undergoing a major transition (Case B). Earlier, it was necessary to own a car to use one regularly. Now, with the availability of ride-hailing services, it isn't. I have seen this with my own aging parents. My father has reached a stage of life where his ability to drive has deteriorated, so the transition to a ride-hailing platform was great: available when he needed it, without the fixed costs of maintaining a car and a driver. The trust bridge for him was the ability to see in real time the cars available near him and the confidence that he would have a car when he needed one; for me, it was being able to see in real time where he was on his ride, since he lives half a world away.
Another such transition on our horizon is to self-driving cars (Case C). Imagine how difficult it is for most people to hand over control to a machine that holds their life and safety in its power while driving them from point A to B. The trust bridge in this case will be objective data showing how much better autonomous, interconnected machines are at avoiding accidents than humans; the tipping point will come when die-hard drivers who have refused to adopt autonomous cars find it difficult to get automobile insurance.
Think about our transition from records, to cassettes, to CDs and DVDs, and finally to streaming content (Case D). The last transition was the most difficult because it means we no longer have physical possession of the copyrighted content we bought – we rely on the belief (the trust bridge) that we can stream it whenever we want from the cloud, and we trust the Apple iTunes Store or the Amazon store to remember that we purchased it and to let us access it at any time. It also implies that we trust these cloud stores to be available when we need them.
Companies moving from owned and managed data centers to shared cloud providers (Case E) is another example. The advantage of shared cloud providers is the ability to elastically expand and contract capacity without a huge capital investment, paying only for what we use. Secondly, cloud adoption has allowed rapid evolution in container-based application deployment, where we have become completely oblivious to the underlying server architectures, operating systems and technology stacks, using containers like Docker and container orchestration options like Kubernetes. Thirdly, with the rapid evolution in managing large amounts of data, a number of ML and AI frameworks are available as plug-and-play options; in the owned data center paradigm, bringing in such frameworks would have meant a whole new integration and onboarding project. The trust bridge here has been the rapid strides in cloud security infrastructure and capabilities for public, private and hybrid cloud offerings, and the assurance of network, container, process and data isolation between clients.
In the healthcare domain, introducing new products and therapies is also an example of an inflection point (Case F). The trust challenge is knowing the long-term benefits and risks of a new treatment or therapy. Governments put regulations in place to slow the approval and adoption of any new therapy or drug until it makes it through Phase I–IV trials. Once the new therapy has cleared these hurdles, adoption is a lot easier, because people have faith in the regulatory process (the trust bridge) to ensure that the new product is safe and effective before it enters the market.
Similar is the case for abstract machine learning models and autonomous systems (Case G on the map). Some esoteric ML models may not be well understood or tested across all permutations and combinations. In my experience, this leap becomes even more difficult when we deal with self-learning or autonomous machine learning systems. Rather than letting adoption be a leap of faith for your users, it is critical that we understand user psychology – their fears and beliefs – and build the trust bridge to facilitate adoption. I suggest the following process for evolving ML models (the model validation step is the trust bridge here):
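As a rough illustration of the validation step acting as the trust bridge, a promotion gate might compare a candidate model against the incumbent process on the same held-out data before anyone is asked to rely on it. The names, metric and threshold below are hypothetical, not a prescription:

```python
from dataclasses import dataclass

@dataclass
class EvalResult:
    model_name: str
    accuracy: float        # any business-relevant metric would work here

def promote_if_better(candidate: EvalResult, incumbent: EvalResult,
                      min_uplift: float = 0.02) -> bool:
    """Validation gate: only promote the new model if it beats the model
    (or manual process) currently in production by a meaningful margin
    on the same held-out data."""
    uplift = candidate.accuracy - incumbent.accuracy
    if uplift >= min_uplift:
        print(f"Promote {candidate.model_name}: +{uplift:.1%} over {incumbent.model_name}")
        return True
    print(f"Keep {incumbent.model_name}: {candidate.model_name} uplift only {uplift:.1%}")
    return False

# Hypothetical numbers, purely for illustration
promote_if_better(EvalResult("gbm_v2", 0.83), EvalResult("current_process", 0.78))
```

The point is that users are shown objective, like-for-like evidence before being asked to trust the new model, rather than being asked to take the leap on faith.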
I have used the following framework to build my case for change. My experience has been that whenever my team has focused on building a trust bridge in technology inflection cases, or in projects with significant change, the results have been very positive. I would love feedback on what folks have found useful when driving change.
PostScript:
Given the momentous changes we as a society are staring at in the near future – climate change, the evolution of AI, and genetic engineering like CRISPR – we need to make taking these trust leaps easier in order to keep up with the rapid changes in our environment, ideas, processes, products and services. And this is where a Change Management practice that focuses on understanding user psychology – beliefs, desires and fears – and on building the "Trust Bridge" will be critically important.
When you start talking about Machine Learning algorithms, most people’s eyes glaze over – it’s higher order math, something cool and distant that they don’t want to be bothered with.
So how can you engage in a meaningful conversation about these algorithms and demonstrate how and why they add value to make the case for implementing them?
We have had success showing measurable and quantifiable results from the predictions these algorithms produce that are better than the current-state process. However, not everything is as easy to measure or demonstrate. Hence, classifying your ML models into categories and having a measurement framework around each category helps.
The following elements are critical in establishing such a measurement system:
I have found the following classification useful for the work that my team is doing. Now this isn’t a comprehensive framework of all ML models available, but just something that we have found useful:
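As a sketch only, and with hypothetical categories, metrics and baselines standing in for whatever your own models and current-state processes are, such a classification plus measurement framework can be encoded as simply as a lookup table:

```python
# Hypothetical mapping of model categories to the evidence we would show
# stakeholders for each; the categories and metrics are illustrative only.
MEASUREMENT_FRAMEWORK = {
    "forecasting":       {"metric": "MAPE vs. current planning process",
                          "baseline": "analyst forecast"},
    "classification":    {"metric": "precision/recall vs. manual review",
                          "baseline": "current rules engine"},
    "recommendation":    {"metric": "conversion uplift in an A/B test",
                          "baseline": "existing static ranking"},
    "anomaly detection": {"metric": "true alerts per 100 raised",
                          "baseline": "threshold-based monitoring"},
}

def measurement_plan(category: str) -> str:
    """Return the agreed evidence for a given model category."""
    spec = MEASUREMENT_FRAMEWORK[category]
    return f"Measure {spec['metric']} against baseline: {spec['baseline']}"

print(measurement_plan("forecasting"))
```

The value is less in the code than in the agreement it captures: every model category has a pre-agreed metric and a pre-agreed baseline to beat.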
Will keep you posted on the results. Happy to hear about alternate frameworks and paradigms that folks have used to gain acceptance.
We have seen AI systems evolve from the simple to the more complex: from simple correlations and causation, to model creation, training and advanced prediction, and finally to unsupervised learning and autonomous systems.
Human society has historically been comfortable assigning accountability to one of its own, i.e. a human actor who creates, authors or mentors these models and can assume accountability and responsibility. If you trace the evolution of our justice systems from the time of Hammurabi (sixth king of the First Babylonian Dynasty, reigning from 1792 BC to 1750 BC) to the modern nation states of today, we accept good behavior within a well-defined system of laws and rules, and deviation from these attracts punishment meant to drive compliance.
But will this always be true? Do we need new trust models and enforcement mechanisms?
Morals are the objective, transcendent ideals we base our ethics upon. Jonathan Haidt, in his exploration of conservative and liberal morality, describes five key foundations – harm, fairness, authority, in-group and purity. Per his TED talk, liberals value the first two and score low on the other three, while conservatives value the latter three more than liberals do.
Ethics are the subjective rules by which we govern our behavior and relate to each other in an acceptable manner. So which of these moral principles and in what measure should our ethical rules be based upon? And who chooses?
These ethics rules determine the system’s behavior in any situation and thus form the basis for the trust system we will operate upon with the autonomous system. (See the definitions of trust in my earlier post here)
Think about a newborn infant. He or she usually has a base set of moral frameworks hard-wired into the brain, and it is life experiences that shape how that model develops further: what behaviors are acceptable in society and which are not, what is considered good versus evil, and so on. The only thing a creator can be held responsible for is the base template that goes into creating the autonomous AI system. Anything learnt after birth falls under the nurture argument, for which it would be very difficult to assign accountability.
If we consider an AI system to be similar, how do you provide it with a moral compass? Would you expose it to religion (and which one?) to teach it the basics of right and wrong, or set up reward and punishment systems to train it to distinguish desirable from undesirable behavior? And again, who determines what is desired and what is not – is it us humans, or do we leave this up to the autonomous AI system?
When would we as a society be ready to take the leap? A number of thought leaders, including Stephen Hawking and Elon Musk, have warned us about this. Are we ready to heed those warnings and muzzle our explorations into truly autonomous systems, or is this an arms race where, even if we exercised restraint, someone somewhere may not act with the same constraints? And finally, was the purpose of our species to develop something more intelligent than us, able to outpace, out-compete and eventually sunset our civilization?
I guess only time will tell, but meanwhile it is important to at least model ethical rules as we know them into the systems and autonomous programs we create – similar to Isaac Asimov's three Laws of Robotics: a robot may not injure a human being or, through inaction, allow a human being to come to harm; a robot must obey orders given to it by human beings except where such orders would conflict with the First Law; and a robot must protect its own existence as long as such protection does not conflict with the First or Second Law – while realizing that our biases, desires and ambitions will always be a part of our creation…
We need a framework to establish certain base criteria for the evolution of AI – something that will be the basis of all decision-making capabilities. This core ROM, which cannot be modified, should form the basis of trust between humans and autonomous AI systems.
This basic contract is enforced as a price of entry to the human world and becomes a fundamental tenet for trust between us humans and autonomous systems allowed to operate in our realm.
As long as humans trust that decision making rests on these core principles (like Asimov's three Laws of Robotics described above), we will operate from a position of mutual trust, where we should be able to achieve a mutually beneficial equilibrium that maximizes benefits all around.
Given today's CRISPR announcement about two babies being born with their genes edited using CRISPR-Cas9, it's all the more urgent for us to establish this common framework before the genie is out of the bottle…
It’s a platform that allows for data collection, extraction, storage, exploration, visualization, inference & analytics and ultimately modeling to provide predictions and classifications.
Usually the business need is to predict the future and establish causation. To predict the future, one must determine independent and dependent variables, separate causation from correlation, and thus come up with models that predict what the future will look like when we tweak certain causative factors.
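As a toy illustration (the variable names and numbers are made up), a single-feature regression shows how a fitted model is used to ask "what happens if we tweak the causative factor?":

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Hypothetical data: independent variable (weekly ad spend, $k) and
# dependent variable (weekly sales, $k). Purely illustrative numbers.
ad_spend = np.array([[10], [15], [20], [25], [30]])
sales    = np.array([110, 128, 151, 168, 190])

model = LinearRegression().fit(ad_spend, sales)

# The "what if we tweak it?" question is only meaningful if spend truly
# drives sales (causation), not merely moves together with it (correlation).
print(model.predict(np.array([[40]])))   # predicted sales at $40k spend
```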
There are three key elements of data management –
1. Data engineering – the set of practices that converts raw data into processed data by building data pipelines, applications and APIs. This process is concerned with how data is captured, moved, stored, secured, processed (transformed, cleansed and aggregated) and finally utilized.
The different stages within data engineering are:
Data acquisition is the process of acquiring data with concerns around the format of source data, existing interfaces that are available or new ones that have to be built, security (including authentication, authorization and encryption), maintaining reliability and finally latency.
Data transport manages reliability and integrity in transport, security, latency and costs and bandwidth.
Data storage deals with the flexibility in storage, choices around schema and schema less storage, high availability and redundancy and cost of storage.
Data processing deals with transformation, cleansing, filtering, enriching, aggregating and machine learning models for prediction.
Finally, Data Servicing is the availability pattern to end consumers of the data, dealing with latency, redundancy and availability, consumer competency in understanding and utilization of these data sets, flexibility of schema and ultimately the APIs for consumer applications.
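A minimal sketch of these five stages wired together, using an in-memory list and SQLite purely as stand-ins for a real message broker and data store (the record format is hypothetical):

```python
import json
import sqlite3

def acquire(raw_line: str) -> dict:
    """Acquisition: parse a record from a source feed (format, auth and
    latency concerns live here)."""
    return json.loads(raw_line)

def transport(record: dict, channel: list) -> None:
    """Transport: hand the record to a durable channel; a message broker
    would replace this in-memory list in practice."""
    channel.append(record)

def store(conn: sqlite3.Connection, record: dict) -> None:
    """Storage: persist with whatever schema flexibility the use case needs."""
    conn.execute("INSERT INTO events (payload) VALUES (?)", (json.dumps(record),))

def process(conn: sqlite3.Connection) -> float:
    """Processing: cleanse and aggregate; here a trivial average."""
    rows = conn.execute("SELECT payload FROM events").fetchall()
    return sum(json.loads(r[0])["value"] for r in rows) / max(len(rows), 1)

def serve(avg: float) -> dict:
    """Servicing: expose results in a consumer-friendly shape (an API in practice)."""
    return {"average_value": avg}

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (payload TEXT)")
channel: list = []
for line in ['{"value": 3}', '{"value": 7}']:
    transport(acquire(line), channel)
for rec in channel:
    store(conn, rec)
print(serve(process(conn)))   # {'average_value': 5.0}
```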
2. Data Analytics – the practice of converting the data produced by data engineering into insights and information. The tools we currently use in this space are the Tableau and Qlikview reporting packages.
3. Data Science – the ability to predict the future using data and patterns observed in the past. The work includes integrating and exploring data, building models over such aggregated data, extracting patterns from past data and finally presenting results either through reports or model-powered applications. Some of the key tools we have used on our platform are Jupyter and RStudio for ongoing algorithm development, Spark for distributed execution and Kafka for messaging and data acquisition.
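For instance, a common acquisition path on a platform like this reads events from Kafka into Spark Structured Streaming. This is only a sketch: the broker address and topic name are hypothetical, and it assumes the spark-sql-kafka connector is on the classpath:

```python
from pyspark.sql import SparkSession
from pyspark.sql.functions import col

spark = SparkSession.builder.appName("ingest-sketch").getOrCreate()

# Read a stream of raw events from a (hypothetical) Kafka topic.
events = (spark.readStream
          .format("kafka")
          .option("kafka.bootstrap.servers", "localhost:9092")
          .option("subscribe", "sensor-events")
          .load()
          .select(col("value").cast("string").alias("payload")))

# Write to the console for illustration; a real pipeline would land the
# data in a lake or warehouse sink instead.
query = (events.writeStream
         .format("console")
         .outputMode("append")
         .start())
query.awaitTermination()   # blocks until the stream is stopped
```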
Big data introduces the following additional complexities in data processing:
1. Volume: the size of the data sets to be managed and processed is usually a few orders of magnitude larger than in typical OLTP scenarios. This means additional resources, the ability to scale horizontally, and careful management of latency requirements.
2. Velocity: requires real-time data and event handling, with a view to staying fast and avoiding bottlenecks.
3. Variety: with data collection augmented by IoT devices in addition to traditional mechanisms, there is a need to manage text, images, audio and video.
4. Variability: data may arrive in fits and starts, implying the need to handle spikes, an architecture that allows for decoupling (managed using buffers), and the ability to maintain latency requirements despite the swings.
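A small sketch of the decoupling idea behind variability: a bounded buffer absorbs a bursty producer so a steady consumer is never overwhelmed (the burst sizes and rates below are arbitrary):

```python
import queue
import random
import threading
import time

# The bounded buffer decouples a bursty producer from a steady consumer,
# so spikes are absorbed instead of overwhelming downstream processing.
buffer: "queue.Queue[int]" = queue.Queue(maxsize=100)

def bursty_producer() -> None:
    for _ in range(3):
        for i in range(random.randint(5, 20)):   # data arrives in fits and starts
            buffer.put(i)
        time.sleep(0.5)                           # then goes quiet
    buffer.put(-1)                                # sentinel: no more data

def steady_consumer() -> None:
    while True:
        item = buffer.get()
        if item == -1:
            break
        time.sleep(0.01)                          # fixed processing rate

producer = threading.Thread(target=bursty_producer)
consumer = threading.Thread(target=steady_consumer)
producer.start(); consumer.start()
producer.join(); consumer.join()
print("all spikes absorbed by the buffer")
```

In practice the in-memory queue would be a message broker, but the design choice is the same: let the buffer, not the consumer, take the hit from the spikes.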
Happy to share real life experiences on the above… please reach out if there is interest.
With the recent news of numerous data breaches and of companies caught with questionable business and technology practices for managing customer data (which may well be breaches of public trust), the question that comes to mind is: how important is "Trust"? How much should you invest in proactively maintaining trust, and what is the cost of a breach of trust? How do you recover from a breach? What foundational elements of trust are damaged by such a breach? And, borrowing a marketing slogan from MasterCard's Priceless campaign, is it fair to say: "Not having your company's data breach on the front page of the Wall Street Journal: Priceless"?
Let’s look at some basics…
A few definitions that I have found most relevant –
Paraphrasing, social psychologist Morton Deutsch:
Trust involves some level of risk, and risk has consequences, with payoffs that are either beneficial or harmful. These consequences depend on the actions of another person, and trust is the confidence you have that the other person will behave in a manner that is beneficial to you.
Patricia Jenkinson, Professor of Communications at Sacramento City College defines the various overlapping elements of trust as follows –
♦ Intent – to do well by others
♦ Character – being sincere, honest and behaving with integrity
♦ Transparency – open in communication with others and not operating with hidden agendas.
♦ Competence / Capacity – ability to do things
♦ Consistency / Reliability – keeping your promises, meeting your obligations
Trust is important for us to feel physically and emotionally safe. With more trust, we can effectively and collaboratively work together towards common goals by sharing resources and ideas. When trust is high, we openly express thoughts, feelings, reactions, opinions, information and ideas. When trust is low, we are evasive, dishonest and inconsiderate.
There are two basic types of trust: interpersonal trust, which concerns one's welfare, privileged information and relational commitment; and task-oriented trust, whose dimensions are the ability to do the task and the follow-through to finish it.
Yuval Noah Harari in his book Sapiens, describes “cooperation in large numbers” to be one of the key factors for human success over other species (which were physically stronger and much more adept at surviving the extreme elements of the earth’s environment). Trust allowed us to cooperate in large numbers and collectively gave us the ability to accomplish tasks beyond the capacity of a single individual. Chimpanzees also cooperate, but not in large numbers like humans which limits the capability of the clique.
With the advent of the digital age and large, virtually connected social networks, our paradigms of digital trust have changed substantially. Rachel Botsman of Oxford University, in a series of TED talks, describes the transition from hyperconsumption to collaborative consumption, and the evolution of trust from local to institutional to distributed.
This evolving distributed trust platform has three foundational layers (described as the Trust Stack by Ms. Botsman) which allow us to trust relatively unknown people –
♦ Trust the Idea
♦ Trust the Platform
♦ Trust the other user
When there is assurance of accountability for a user's actions, enforced by the platform (which can restrict future transactions by that user for bad behavior), the platform lends implicit trust to transactions between complete strangers, such as those on the Uber or Airbnb platforms. One illustrative example is how people behave differently (say, in cleaning up their room) when staying at a hotel versus with an Airbnb host. In the former, the expectation is that the institution will not hold them accountable for bad behavior, while in the latter the platform enforces accountability through mutual feedback and social reputation for both guest and host, enhancing trust and ensuring good behavior.
Per Ms. Botsman, this is just the beginning, because the real disruption happening isn’t technological. It is fundamental to the way we will transact in the future. Once a trust shift has happened around a behavior or an entire sector, you cannot reverse this change. The implications here are huge.
A Simple Experiment
Dan Ariely, a professor at Duke, describes a very nice social experiment in his TEDx talk in Jaffa. Suppose that in a model society everyone is given $10 at the beginning of the day; if they put this money in the public goods pot, then at the end of the day everything in the pot is multiplied by 5 and divided equally. For example, if 10 people were each given $10 every morning and they all put everything in the public goods pot, the pot would hold $100, which multiplied by 5 becomes $500, and everyone would get $50 back at the end of the day; everyone is happy. If the next day one person cheats and everyone except that person puts in their $10, there would be $90 in the pot. At the end of the day the pot would hold $450 and everyone would get $45 back. Everyone would notice that they did not get the full $50, while the person who betrayed the public trust ends the day with $55. Dan's next question was: what would happen the following day? No one would contribute to the public pot. His point is that most trust games play out like a prisoner's dilemma, with a very unstable equilibrium where everyone contributes and cooperates, and a stable equilibrium where no one does. To maximize overall benefit, one has to ensure that everyone cooperates; a single defection drags down the overall benefit from cooperation. The moral of the story is that "Trust" is a public good, and an incredible lubricant for society. When we trust, everyone is better off, and when people betray the public trust, the system collapses and we are all worse off.
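The arithmetic of the experiment is easy to check with a few lines of code (a toy simulation written for this post, not taken from the talk itself):

```python
def public_goods_round(contributions: list, endowment: float = 10.0,
                       multiplier: float = 5.0) -> list:
    """Each player keeps whatever they withheld; the pooled money is
    multiplied and split equally among all players."""
    pot = sum(contributions) * multiplier
    share = pot / len(contributions)
    return [(endowment - c) + share for c in contributions]

# Day 1: all ten players contribute their full $10
print(public_goods_round([10] * 10))        # every player ends with $50

# Day 2: one player defects and keeps the $10
payoffs = public_goods_round([10] * 9 + [0])
print(payoffs[:9])                          # the cooperators get $45 each
print(payoffs[-1])                          # the defector ends with $55
```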
A number of companies – eBay, Airbnb, Uber and others – have used transparency and persistent reputation as mechanisms to keep people from betraying the public trust. The cost of betrayal on these platforms is that the betrayer can no longer transact on the platform because of the hit to his or her reputation.
Adding punishment and revenge to the mix also changes the game. A reputation for being vengeful deters the first player from defecting. The justice system and the police are a common example of using punishment to maintain trust in society.
For companies that build trust platforms allowing even strangers to transact, a betrayal of trust by the platform itself is far more damaging than a transgression by a single user on it. With such a breach there is a real possibility of users moving to the extremely stable equilibrium of not cooperating and abandoning the platform (losing network scale is an existential threat), and moving them back to cooperating and using the platform is a herculean task.
No longer can we rest on our laurels by simply calling ourselves trustworthy, without redesigning our systems, processes and people to be transparent, inclusive and accountable.
So remember, protect the idea first, then the integrity of the platform and then individual issues or breaches that may impact trust. Once trust is broken, it’s very hard to rebuild or repair.