Blockchain Based Control and Safety of Artificial Intelligence

Buzzwordy title alert.

Although many individuals had worried about recursively self-improving AI, the alarm wasn't really sounded until Nick Bostrom wrote Superintelligence. For readers unfamiliar with why superintelligent AI might be scary, my notes or this post here are a good place to start. Long story short, an AI that is vastly more intelligent than us and isn't aligned with our interests may pursue goals that are very much not in our best interest.

The oft-quoted example of an AGI, aka a superintelligent AI, gone awry is the paperclip maximizer. While this example doesn't capture all the nuance, it gets the gist of the problem across: an AGI is created whose sole goal is to make as many paperclips as possible, and because it's so good at its job, it ends up killing all humans and turning all matter into paperclips. A more "human" example of an AGI gone awry is a corporation, say Enron or any oil company. Cash flows and profit, the internal metrics of success (the objective functions), become divorced from the original purpose of creating a good for society. Bitcoin and other cryptocurrency networks also look like recursively improving organisms with no clear off-switch, which has some individuals worrying about blockchains and AI together. An AGI gone awry would be the principal-agent problem on steroids. You could well argue that Bitcoin and other cryptocurrencies, especially the Proof-of-Work variants, are already a version of the paperclip maximizer.

The basic assumption that researchers in the field make is that AGI is going to happen someday. If not 15 years away, then less than 100. A hundred years in the course of the universe is nothing. Therefore, solving the problem of defining an objective function, or guardrails, for an AGI is of the utmost importance. Sadly, this work isn't well incentivized today. The work that has been done can be summed up as follows:

  • Alignment: making sure its objective function doesn't kill us. The work I'm most familiar with is coherent extrapolated volition and approval-directed agents.
  • Capability restraint: for example, an AI that is air-gapped from the internet and can give only yes or no answers, aka becoming an oracle.

However, Bostrom presents another idea for AI control that I think doesn't get enough coverage. In a few short words: "make the objective function tied to the acquisition of some cryptographic token". While this seems unintuitive at first, it's akin to us working for money, or dogs doing tricks for doggie treats. In the original proposal, Bostrom suggests a centralized cryptographic token managed by scientists. Superintelligence was published before the current hype cycle and before much of the theoretical work on new cryptographic primitives had begun. Since then, there's been a fair bit of fervor over how blockchains can increase the capability of artificially intelligent systems, for example Computable providing more data sets, but not much has been written about the safety side. (No surprise there.) Here are some specific high-level proposals that can be stacked on top of each other to control and align agents.
  1. Use a decentralized cryptocurrency as the reward function. This one is straightforward enough. Using a centralized cryptographic token as the goal suffers from the same problem that kept centralized digital currencies from taking off: a single point of failure. If a scientist is held at gunpoint by an AGI, he or she will probably hand over the tokens. It's much harder to hold a network of miners and anonymous token holders at gunpoint.
  2. Instantiate the AGI as a DAO. This lets the entity operate trustlessly, which is a double-edged sword: the AGI can sustain itself and operate with or without supervision, but it also leaves an auditable trail of where and when the objective-function cryptocurrency was sent to each address.
  3. Define the reward function as a smart contract executed trustlessly. This is where it starts to get a little harder to conceptualize. For reinforcement learning agents playing Starcraft or Go, the objective function is simply to win the game, something we can state in plain English. For an AGI, we may want to check up on the agent's operation and update the objective function as we go, while never letting the agent itself change any part of it. One approach: use a widely distributed governance token, so pseudonymous actors can approve changes to the objective function. Keep their identities private so the agent can't harass or bribe them, and keep an on-chain trail of "reputation" for voters so past voting behavior can be monitored for bribery.
  4. Use curve-bonded tokens to prevent takeover attacks. Curve-bonded tokens have programmatically defined prices for minting and redeeming (and then burning) tokens. To perform any goal, the agent is probably going to have a lot of cash on hand. What if it tries to buy up the supply of the governance token? That would be bad: it could then change its own objective function. To prevent this, we can set the curve so the purchase price becomes absurdly high as more tokens are minted, and set an extremely small redemption price to disincentivize sales. (A toy sketch of such a curve follows this list.)
  5. Use TCRs (or some other game-theoretically sound ranked list) to tokenize "human values" and direct the AGI to optimize for that set of values. The previous examples define the goal in terms of a token held, which would be easy to calculate if the agent's goal were simply to maximize the NAV of its investment portfolio. However, as we know today, defining success purely in terms of money can lead to perverse outcomes. If the means of money become the ends, agents are pushed toward greedy short-term actions.
  • Instead, we might want to optimize for human well-being. How do we define this on-chain so the measure can't be hacked by an autonomous agent? We utilize decentralized stake-based rating games, namely TCRs with a curve-bonded token for staking. You can read a little more about TCRs here.
  • Back to representing human well-being "on-chain". First, we have to look at how it is defined in the real world. Various NGOs and ratings organizations track things like the Human Development Index (HDI), happiness indices, and GDP per capita. These are top-line objectives that countries may aim for through actions that make individual citizens better off. Of course, countries are free to ignore these ratings. Autonomous agents won't be, if their objective functions are locked down.
  • So how does that tie into the blockchain? These indexes have a large self-reported component today, and TCRs are good for encoding intangible, subjective information into hard economic terms. By curating a list composed of things like "happiness", "wealth for humanity", and "sugar, spice, and everything nice", we might have the agent take off-chain actions that benefit humanity.
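To make proposal 4 concrete, here is a minimal sketch of a bonding curve with a steep mint price and a punitive redemption price. The curve shape and every parameter are illustrative assumptions, not a worked-out mechanism.

```python
# Illustrative bonding curve for a governance token (assumed parameters).
# The mint price grows steeply with supply and the redemption price is
# deliberately tiny, so an agent with a large treasury cannot cheaply corner
# the governance supply and then dump it.

def mint_price(supply: float, base: float = 1.0, k: float = 3.0) -> float:
    """Price (in reserve currency) to mint one more token at the current supply."""
    return base * (1.0 + supply) ** k          # polynomial curve: gets absurd quickly

def redeem_price(supply: float, base: float = 1.0) -> float:
    """Price paid out when burning one token -- intentionally near zero."""
    return 0.01 * base

def cost_to_buy(n: int, supply: float = 0.0) -> float:
    """Total cost for an attacker to mint n tokens starting from `supply`."""
    return sum(mint_price(supply + i) for i in range(n))

if __name__ == "__main__":
    # Buying the first 100 tokens is cheap; buying enough to take over is not.
    print(f"cost of first 100 tokens: {cost_to_buy(100):,.0f}")
    print(f"cost of tokens 900-1000:  {cost_to_buy(100, supply=900):,.0f}")
```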

The largest point of failure would seem to be the voters, especially if their identities are revealed. Perhaps we could have less intelligent agents, each with their own objective functions that need to be modified, vote on issues for the most intelligent agent. With any organization or incentive structure, there is always a balance between being able to change something and not letting the wrong actors change it. I think this game is especially fun to play when thinking through an actor that is vastly more intelligent than I am.

Inadequate Equilibria: Part 1

The world can be a depressing place if you follow the second-by-second news cycle. If you had a hundred-year newspaper, it'd probably be mostly good news. Of course, we live in the second-by-second cycle, and much of life is predicated on living in and interacting with relatively broken systems. This has played out throughout history, and the proposed answers have ranged from exit—Thoreau's Walden and those who choose to live off the grid these days—to the revolutionary—communism and socialism, which sought to overhaul the system completely. Inadequate Equilibria is in the same vein as Freud's Civilization and Its Discontents.

As a human being, I think it's critically important to contribute to this project. There are individuals who choose to exit the system: the Airbnb host, or the person who just lives off a 4% drawdown of their existing assets (aka a retiree at any age). Of course, even then you're indirectly allocating capital to the right places, but this could be done much better.

As a startup founder and investor and general human being, much of my time is spent on these problems: weaseling our way toward something that makes a big difference in the world. Many of the systems seem so hard to flip or change. While this is something we know intuitively, in a quick 170 pages Yudkowsky characterizes it in a clear voice without resorting to throwing up his hands.

Starting from a theoretical basis, he seeks to answer the question, “why are so many aspects of the world not optimized to the limits of human intelligence in the manner of financial prices?"

Depending on your perspective, pouring so much of our collective human intellect into optimizing short-term financial prices could be heartening or disheartening. It is disheartening to see so much talent under-allocated to efforts that don't seem to produce end results. After all, no one likes paperwork. Doubly so, no one likes paperwork that doesn't mean anything. Triply so, no one likes paperwork created by "the system" or "the man", which you have to adhere to or else you won't be able to eat, and which you can't even change, because without it "the man" wouldn't be able to eat either.

The book can loosely be divided into three sections. This post is the first in a series.
  1. Laying out the meta concepts of inefficiency, inexplicability, and inadequacy
  2. Inadequate equilibria in all areas of society (to be published)
  3. Adding inadequate equilibria to your mental toolbox and life (to be published)
Much of technology can be characterized as the attempt to bring adequacy to human endeavors. Once upon a time, markets were so inefficient with respect to information that Ben Graham could make money buying stocks whose market value was less than their book value. Now that markets and systems are inadequate with respect to our wetware and incentives, they can be much stickier and harder to change. Of course, Charlie Munger has often noted that he's underestimated the power of incentives.

This book is such a master course in rationality, society, and how to act that I think it deserves a separate post for each section. I'll cover the first here.

"If I had to name the single epistemic feat at which modern human civilization is most adequate, the peak of all human power of estimation, I would unhesitatingly reply, 'Short-term relative pricing of liquid financial assets, like the price of S&P 500 stocks relative to other S&P 500 stocks over the next three months.'" But why? We've often considered financial markets the nervous system of the economy, the best way to relay information.

Capital allocation through equity prices gives companies a lever.
  • A lower cost of capital improves RoE and RoIC, enables a better capital structure, makes it easier to retain employees because bonuses are worth more, and lets a company use its stock to acquire other companies.

Eliezer introduces the concept of modest epistemology: the later-debunked notion that you should trust the expert view most of the time, unless you really have an opinion and have put in the time. Often, this is the most social-status-oriented view of the world.

Last, he introduces his self-directed treatment of his wife's SAD (Seasonal Affective Disorder) as well as his own dandruff problem.

With these three examples, he respectively illustrates the notions of inefficient markets, unexploitable markets, and inadequate systems. 
  • Inefficient. Example: an individual equity (Apple?). Adequate with respect to: the average person; hedge funds can still take advantage by gathering specialized information.
  • Unexploitable. Example: shorting housing or bad monetary policy (the Japan example). Adequate with respect to: there aren't a lot of underpriced houses, yet you aren't able to short a single house; no financial product exists for that, though CDS and funds can exploit it at a systemic level.
  • Inadequate. Example: the current state of venture funding, colleges as credentialing systems, the US healthcare industry. Adequate with respect to: the normal "in-game" view; only a God's-eye view or a benevolent dictator could overcome it.

All these classes of markets or systems are adequate with respect to something. Markets are efficient relative to the average individual but not to hedge funds. The average investor isn't able to find alpha because, to them, price changes are not predictable.

The view presented is that markets and systems have predictable movements in prices until they reach some equilibrium point, the balance of supply and demand. Each individual agent tries to sop up as much "free money", in the form of predictable price movements, as they can. While inefficient markets are systems where price can be the sole signal of value, inadequate systems are more complicated. Each agent within the system is trying to fulfill their own incentives, whether that's striving for fame or curing individuals of diseases, and right now these incentives are out of whack. The ways in which they get out of whack are collectively known as "Moloch's Toolbox". (Sidebar: if you don't know Moloch, then I don't know you ;).) Collectively, these are the tools below:

  • Principal-agent problem (people who make decisions who don’t benefit)
  • Information asymmetry
    • Example: Colleges act as a filter for 1) hardworking kids 2) smart kids
      • But 4 years and $250k is a lot to prove that...
    • Common knowledge -> how do things settle at this point?
      • it’s a signaling/asymmetric information problem
  • Nash Equilibria of misaligned incentives, and not Pareto Optimal
    • Two-factor markets and signaling equilibria
    • Not Pareto optimal -> a single move exists that would make everyone better off, yet it isn't taken. (A toy example follows this list.)
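As a toy illustration of that last point (the payoffs are made up, not from the book): a system can sit at a stable Nash equilibrium that every player would happily leave if they could all move together.

```python
# Toy coordination game with made-up payoffs. ("legacy", "legacy") is a Nash
# equilibrium: neither player gains by switching alone. Yet ("reform", "reform")
# pays everyone strictly more -- a better state that individually rational,
# uncoordinated players never reach.

ACTIONS = ("legacy", "reform")
payoffs = {
    ("legacy", "legacy"): (2, 2),
    ("legacy", "reform"): (3, 0),
    ("reform", "legacy"): (0, 3),
    ("reform", "reform"): (4, 4),
}

def is_nash(profile):
    """True if neither player can improve their own payoff by deviating alone."""
    p1, p2 = profile
    u1, u2 = payoffs[profile]
    best1 = max(payoffs[(a, p2)][0] for a in ACTIONS)
    best2 = max(payoffs[(p1, b)][1] for b in ACTIONS)
    return u1 >= best1 and u2 >= best2

print(is_nash(("legacy", "legacy")))   # True: the system is "stuck" here...
print(is_nash(("reform", "reform")))   # ...also True, and strictly better for both,
                                       # but no single player can get there alone.
```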

Using these as a lens with which to view society is pretty powerful. But the real question is: how can we break the grip of these equilibrium points? Usually it's come in the form of billionaires, or those with the requisite skills plus luck, resetting the systems. Examples of these reset points might be Bitcoin and SpaceX. Each acts as a reset for the system it is compared against: centralized banking and traditional contract-based space programs, respectively.

While Moloch's Toolbox is extremely simple, there are different counterarguments to it. On one hand, you can say that everyone is self-interested and things won't change because of that, the "cynical economist" view. Or you can hold that systems are bad because people are bad and bad at coordinating, a more nihilistic view. Either way, if you're a startup founder or trying to change the world for the better, you're fighting against multiple forces: the "system", the "haters", and the "cynics". The combination of those three forces makes it quite hard to craft a clever solution. You can't just build a better mousetrap and hope people will come; you need to hit some critical mass across different stakeholders to "flip things". That is incredibly hard, but extremely worthwhile.

Early Adopters of Crypto

Attention is the scarcest thing in the world. On a macro level, the world is awash in capital; interest rates in some countries are below zero. Yet within our daily lives there are always thousands of things competing for our attention. A question I like to think through is: where are the early adopters focusing their limited attention? Chris Dixon says it's on people messing around in garages, building something. A revised question along those same lines is:

Which nation/market is an early adopter of technology? How do their market dynamics predict what might happen in another geography?

First, a little theory. The world is a connected graph of people, and word of mouth is what really gets people to adopt products. Facebook decreased the six degrees of separation down to around 4.5. However, the distribution of connections between people isn't even, and when we think of information flow, it's better modeled as a directed graph: person A can influence person B, but usually not vice versa.

When I think of how information spreads, I think of tinder on dry terrain. A spark doesn't catch 100% of the time, but when it does, there's the potential for a cascade of "catching fire". Within a network there are early adopters and late adopters, differentiated by personality traits, sources of information, and levels of connectedness both upstream and downstream in terms of where they get their information. I usually split the adoption curve into three sets of people (a toy cascade simulation follows the list):

  • 1) People who do things because they are novel or cool. This is an intrinsic motivator. These are the early adopters.
  • 2) People who do things because there's an economic need. This is an extrinsic motivator. These are the middle adopters.
  • 3) People who do things because everyone else is doing them. These are the late adopters.
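A toy way to see why the shape of the graph matters (every parameter here is made up): seed a directed influence graph with a few early adopters and let adoption cascade once enough of the people a node listens to have adopted.

```python
import random

# Toy adoption cascade on a directed "influence" graph. All parameters are
# invented; the point is that a handful of early adopters can tip a whole,
# tightly connected network once enough upstream influences have adopted.
random.seed(0)
N, FOLLOWS, THRESHOLD = 1000, 5, 0.2      # raise THRESHOLD and the spark fizzles

influences = {i: random.sample(range(N), FOLLOWS) for i in range(N)}  # who i listens to
adopted = set(random.sample(range(N), 10))                            # the early adopters

changed = True
while changed:
    changed = False
    for i in range(N):
        if i in adopted:
            continue
        frac = sum(j in adopted for j in influences[i]) / FOLLOWS
        if frac >= THRESHOLD:
            adopted.add(i)
            changed = True

print(f"{len(adopted)} of {N} nodes adopted")
```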

So now that we have that out of the way, this is my current mental model for crypto adoption.

I am increasingly looking toward Asia for technology trends, and more specifically toward Korea for cryptocurrencies. Because of the special shape of its social graph, Korea shows interesting winner-take-all dynamics as well as early-adopter behavior. Information spreads quickly because of the connectedness and centrality of that graph. The whole nation uses KakaoTalk, has high-speed internet access, has a high appetite for novelty and coolness, has very tight-knit business communities, and has historically been an early adopter of new technologies. Before the States got around to these things in Web 1.0, Korea was already on top of camera phones in the early 2000s, MMORPGs, and over-the-top streaming (aka Netflix-style services).

Bill Gurley and associates caught onto this trend and planned a trip to Korea to see what might be gleaned from that market. What resulted was a sharpening of their thesis around Social, Local, Mobile. When the iPhone hit everyone's hands in 2008, we had the confluence of the internet, GPS, and a camera in every pocket. The rest is history: that Benchmark fund invested in a plethora of internet hits, most notably Uber and Snapchat.

The current environment in Korea is pretty telling. Some 30% of South Koreans own or hold some sort of crypto, past the tipping point for widespread social adoption. When regulators tried to shut exchanges down, HODLers raised their voices. I'm excited to see how individuals interact with token-powered protocols as usability and scalability let us move down the marginal-benefit curve of cryptocurrencies. While we're still stuck in the store-of-value and speculative era of cryptocurrencies, that should change soon.

Even now, as staking protocols begin to proliferate, crypto holders are looking for an edge in earning incremental tokens. We should start to see protocols like Vest and Compound.Finance gain adoption as the friction of using them drops.

I'm personally not as bullish on developing countries as a leading indicator of early adoption. As weird as it sounds, they need cryptocurrencies too much. My mental model of early adopters is the people who like toys, the weirdos, the rich, and others willing to accept the flaws in a product. There's something about intrinsic motivation, as opposed to extrinsic motivation, that drives the stickiness and retention of a product or technology. I would much rather look toward high-risk-tolerance ICO investors than toward traditional businesses and crypto "enterprise alliances".

Research Coin v3



At this point, the thoughts contained in this post are quite old. However, I wanted to publish it as I've been tinkering with a new formulation of a protocol. It comes as an extension of thoughts by Nicola @ Protocol Labs.

Research is really expensive, a public good, and has nastier power-law returns than startups. The graph above shows revenues generated by patents, the step that comes after publicly funded research. It took roughly 10k patents produced at Northwestern, at a yearly cost of at least $675 million, to produce one patent with licensing revenue of $1B/yr. That's a cost of roughly $67,000 per patent to get to this holy grail.

Bell Labs spent over $10B in inflation-adjusted dollars on research and brought together the most incredible minds in an incredibly productive environment. Among the end results is the transistor, which we can all thank as the earliest baby step toward you reading this article.

Today, academics and funders complain about the misalignment of incentives for funding science. That's a discussion for another day. 

What we'll talk about today is a potential mechanism to fund basic research at scale: producing the research needed to generate these valuable patents, while also rewarding the scientists, the individuals who actually generate the ideas.

The core idea is recursive payments and ownership instead of just betting on getting accepted into a conference or something.

Units of the research coin system:
  • Paper 
    • Papers have owners
    • Papers have citations to other papers
    • Papers each have a token. This token is distributed to the paper's owners and to the papers it cites, via the mechanism described below.
  • Owners are types of people: an individual contributor, or an organization like MIT, or something else. (A minimal data-model sketch follows this list.)
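A hypothetical data model for the units above might look like the following; all names and fields are illustrative, not a spec.

```python
from dataclasses import dataclass, field

# Hypothetical data model for the units described above (illustrative only).

@dataclass
class Owner:
    address: str                 # an individual contributor, MIT, a DAO, ...

@dataclass
class Paper:
    paper_id: str
    owners: dict[str, float] = field(default_factory=dict)   # owner address -> share of this paper's token
    citations: list[str] = field(default_factory=list)       # paper_ids this paper cites
    token_supply: float = 0.0                                 # the paper's own token
```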

Why staking and markets instead of social style peer-reviewers?
  • Peer review and prestigious journals are proxies for the long-term value of a paper. If we develop a market around each individual paper's value, that might be a good way to get rid of the social gatekeepers of conferences and journals that incentivize "flash in the pan" ideas.
  • There are already too many papers out there. Staking might open the door for algorithmic researchers as well as human peer reviewers.
  • It could induce more reproducibility studies, which are valuable but don't get published in the flashy journals, if people can figure out a way to capture value from them (by shorting a weak paper's token, or by purchasing a share of a strong one).

Why papers and owners instead of organizations?
  • Owners can be anyone who holds an interest: an author, an organization, or something else (a DAO).
  • Tokens could flow directly to researchers, who may do a better job of funding and finding talent than bureaucratic organizations.
  • Should owners have a token that people can purchase?
    • A market for organizations? This may be out of scope here, as organizations could just be wallets, and people could own a share of those if they wanted to.

Why a token?
  • Tokens help align value. They establish a clear, unambiguous signal of a paper's value, whereas citations are social and a little messy (vanity citations, you-scratch-my-back-I-scratch-yours kind of thing).
  • Seems like this is a utility-token kind of thing.
  • Maybe you could just use ETH for bets and payments, but it seems like you'd want some research coin for governance and staking.

Would people invest in research coin?
  • Researchers need to purchase or use research coin to review a paper.
  • People would put money into research coin because they think it produces a better body of science and more socially useful results than what is performed right now.

What's the mechanism?

  1. Research coin distribution event (an ICO, or an airdrop to researchers and other stakeholders).
  2. Each paper, when it hits a preprint server, starts a game. Perhaps the paper's authors have to stake money as well.
  3. Owners stake research coin so they can peer review the paper.
  4. The staked money forms a pool for the paper. The stakers decide on the validity of the paper and on how to initially distribute the paper's individual token between its owners and the papers it cites. This is some type of Schelling-point game.
  5. Stakers who land on the Schelling point are rewarded with newly minted research coin (in some proportion to how much was staked); bad reporters have their stake slashed.
  6. Once the paper's Schelling point is set, the locked research token is distributed recursively to the holders of the paper's token, pro rata to the Schelling point. The intuition is that peer reviewers want to review important papers and will stake tokens to do so; more important papers accrue more staked token, and more token flows recursively to the holders of the paper's token. I guess this is technically securitizing basic research IP, lol.
  7. Markets develop for each individual paper's token. Papers may, later on, yield great research results and therefore generate recursive payments of research coin; as more papers get published, money flows recursively to the parent papers and their owners. The price of a paper's token, denominated in research coin, may then come to reflect the expected future flow of these recursive payments.
  • Recursive ownership is important because it incentivizes research with the greatest NPV in terms of research coin. (A toy sketch of this recursive flow follows the list.)
  • Researchers who publish should get steady payouts as more papers cite them, so they can continue to fund more research.
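Here is a minimal sketch of the recursive flow in steps 6-7, reusing the hypothetical Paper structure above and assuming an arbitrary 70/30 split between a paper's owners and the papers it cites.

```python
# Toy recursive payout, reusing the Paper sketch above. The 70/30 split is an
# assumed parameter: 70% of any inflow stays with the paper's owners, 30%
# cascades to the papers it cites, and so on up the citation graph.

OWNER_SHARE = 0.7

def distribute(paper_id: str, amount: float, papers: dict, balances: dict, depth: int = 10) -> None:
    """Recursively split `amount` of research coin among owners and cited papers."""
    paper = papers[paper_id]
    cascades = depth > 0 and paper.citations
    owner_cut = amount * OWNER_SHARE if cascades else amount
    for owner, share in paper.owners.items():          # shares assumed to sum to 1
        balances[owner] = balances.get(owner, 0.0) + owner_cut * share
    if cascades:
        per_citation = (amount - owner_cut) / len(paper.citations)
        for cited in paper.citations:
            distribute(cited, per_citation, papers, balances, depth - 1)
```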

Additional thoughts
  1. Bounties for research could also function within this system, e.g. "I am putting X research coin up for grabs if you can solve this problem and these people can verify its validity."
  2. Seems a bit complicated.


Economic Returns of Casper

I recently set up a small mining rig to mine Ethereum; my housemates have audio-visual evidence of this. However, the big thing looming over this particular foray into hardware is the switch to Casper, Ethereum's new proof-of-stake (PoS) protocol. When that switch happens, I want to start staking ETH and participating in that consensus protocol as well.

On ethresear.ch, you can find active discussion spurred on by Vitalik and Jon Choi on the potential economic outcomes of the switch and how they might drive monetary policy.

We can look at the current rate of return on PoW mining right now. While the profile of stakers vs. miners may be completely different, I wonder if the total deposit level will be adversely affected. We may end up with fewer deposits than has been posited, somewhere between 0.1% and 0.5% of network value as total deposits (TD), or $60-300M worth. I arrive at this by comparing the current rate of return that miners get to what is being discussed on the site. At the current rate of return, we'd only have about $300M in staked deposits, which feels quite low for securing a $70B chain.

The market-driven rate of return on consensus-protocol rewards in relatively established protocols (BTC, BCC, ETH) has remained high compared to the ranges shown in the Google Spreadsheet that Jon shared. The range between ~20% (equities) and multiples (cryptos/startups) is very large. The current PoW yield is closer to a startup's risk-reward profile than a public-market equity's, with an estimated yield of around 150%; back-of-envelope math below.

Given the current hash rate, we can factor in fixed costs, variable costs (electricity), and non-recurring engineering costs such as physical space to find the current yield, excluding price appreciation. Right now, given the price of ETH, it's pretty damn profitable to mine; I arrive at an estimated yield of about 150% per year. The total cost of the network, including the aforementioned costs, is $3-5 billion, for a security-cost-to-network-value ratio of roughly 5%.

This checks out, given that the payback period on a single NVIDIA 1070 GPU is around 7.5 months.

It seems like we might see a much smaller TD ratio given the market rate of return on mining now. Given the stated target inflation rate of 0.5%, I'm afraid we might see a much lower participation rate at the modeled yield. PoS with its 4-month lockup is seemingly being priced against the same risk/reward and liquidity profile as PoW, and PoW is arguably even more liquid, since I can point my rig at some other token if the price drops. Of course, the biggest driver of this is that returns from HODLing have been so extraordinary; the price of ETH is up roughly 100x YTD. When returns on crypto assets start to stabilize, PoS yields of 15-20%, not including asset appreciation, should look pretty good [1].
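The relationship being modeled in [1] is, at its core, simple: for a fixed issuance rate, validator yield is inversely proportional to how much ETH gets deposited. A rough sketch with illustrative numbers:

```python
# Rough validator-yield arithmetic; numbers are illustrative, not protocol
# constants. For a fixed issuance rate, yield falls as the TD ratio rises.

TOTAL_SUPPLY = 96_000_000     # approximate ETH supply at the time of writing

def validator_yield(issuance_rate: float, td_ratio: float) -> float:
    """Annual yield to validators: new issuance divided by total deposits."""
    annual_issuance = issuance_rate * TOTAL_SUPPLY
    total_deposits = td_ratio * TOTAL_SUPPLY
    return annual_issuance / total_deposits

# 0.5% issuance spread over deposits worth 0.5% of supply pays ~100% a year;
# the same issuance over deposits worth 10% of supply pays only ~5%.
print(f"{validator_yield(0.005, 0.005):.0%}  {validator_yield(0.005, 0.10):.0%}")
```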

  • The current hashrate is ~150,000 GH/s, and an NVIDIA GTX 1070 does ~30 MH/s, so there are approximately 5,000,000 GPUs securing ETH. These GPUs cost about $500 each. If we estimate overhead expenses at 1.2x the per-GPU cost, we arrive at an all-in fixed and NRE cost of roughly $3 billion. (These figures are collected into a single sketch after this list.)
  • If each GPU draws roughly 150-250W, electricity consumption is roughly 7.3-10.5 billion kWh per year; at $0.05/kWh, that's an all-in electricity cost of roughly $365-530 million per year.
  • Taking that into account, we have $3-5.19 billion of cost against a $30 billion network, or a TD ratio of 11-16%.
    • We're paying out $3,858,570,000 in USD, or 12,861,900 tokens per year (13.83% issuance), at $300 per token.
    • That's a yield of 74-100%.
  • At a $700/ETH price, we have $5.19 billion of cost against a $70 billion network, or a TD ratio of ~4%. This is with the current inflation rate of ~15%. Miners currently break even at 7.5 months, leaving 4.5 months of profit. This gives a yield of roughly 55-165%. Hmmm…
    • We're paying out $9,000,000,000 in USD, or 12,861,900 tokens per year (13.83% issuance), at $700 per token.
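The bullet-point math above, collected in one place as a rough sketch with the same assumptions (~150,000 GH/s of hashrate, 30 MH/s and ~$500 per GPU, 1.2x overhead, ~150W per card, $0.05/kWh, ~12.86M ETH issued per year); the yield ranges above come from dividing the annual payout by whichever slice of these costs you amortize in a given year.

```python
# Back-of-envelope PoW economics using the assumptions from the bullets above.

HASHRATE_GHS  = 150_000
GPU_MHS       = 30
GPU_COST_USD  = 500
OVERHEAD      = 1.2
WATTS_PER_GPU = 150
USD_PER_KWH   = 0.05
TOKENS_PER_YR = 12_861_900

gpus = HASHRATE_GHS * 1_000 / GPU_MHS                    # ~5,000,000 GPUs
capex = gpus * GPU_COST_USD * OVERHEAD                   # ~$3B of hardware + NRE
kwh_per_year = gpus * WATTS_PER_GPU / 1_000 * 24 * 365   # ~6.6B kWh
opex = kwh_per_year * USD_PER_KWH                        # ~$330M/yr of electricity

for eth_price in (300, 700):
    payout = TOKENS_PER_YR * eth_price
    print(f"ETH ${eth_price}: payout ${payout/1e9:.2f}B/yr vs "
          f"${capex/1e9:.1f}B of hardware and ${opex/1e9:.2f}B/yr of electricity")
```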

[1] https://ethresear.ch/t/casper-validator-yield-as-a-function-of-td-and-issuance/222



Crypto's Ladder of Abstraction

Like all good blog posts, this one starts with a tweet. In this case, I can point to Nicola for spurring this one. Niraj and I previously collaborated on a post called “Merging Chains”. You can think of this in the same spirit as that post.


Portability has been a great side-effect of abstraction in computation. Higher-level languages did this for code, and abstraction will have the same effect in the decentralized world. In the centralized world, we have Dropbox, Google Drive, and Evernote, which let us take our information wherever we want; before that, the model was a thumb drive or clunky data transfers. The internet paved the way for user-side abstraction: when we upgrade devices, we don't have to worry about our data. On the dev side, we've seen the evolution of serverless; preceding that were IaaS plays, namely AWS, and before that you had to rent hardware and co-locate it.

Right now, a lot of effort is being spent building on a base-layer, Turing(ish)-complete, stack-based machine like Ethereum. While Ethereum is the market leader right now, things might change. A 0-day exploit might appear, someone very influential within the organization might die, or the switch to PoS might turn out to have a bad security model. These don't necessarily reflect what I believe, but they illustrate the "existential" risks that might compromise a base-layer protocol.

In theory, a mature dapp built on top of Ethereum shouldn't derive much of its value from Ethereum's security model; it should be able to move its contract state to another base-layer protocol. Another way to look at it is through the lens of the history of abstraction mentioned above: a user doesn't really care whether Dropbox runs its own servers or is hosted on AWS. Of course, they care about their information getting lost or stolen, but that's for the developers to worry about.

As mentioned previously, developers today don't have to rent servers, and developers on Ethereum don't have to write EVM bytecode either. We've already seen people build on different platforms: Kin moved to Stellar rather than building on Ethereum, at least initially. I have a gut feeling that switching costs may be lower than people think, especially since new base-layer protocols such as RSKSmart are already taking the tack of supporting the EVM. The Ethereum state trie is also publicly available, which lets people do airdrops and the like, as with EtherMint.

And of course, Ethereum abstracted away the messy world of bootstrapping your own miner-secured blockchain. However, as we build this world of abstractions, it's easy to forget that they rest on real components: you can write in a high-level language, but your code is still executed by self-interested miners, and that leads to interesting side effects and security concerns.

Ryan Shea and co. spent time thinking about migrating state for Onename, so this isn't a thought that's completely out of the blue. We're also seeing protocols such as Cosmos, Polkadot, and aelf presented as partial scaling solutions; hopefully they'll let protocols now built only on Ethereum work on other base-layer world computers with ease.

In a formally verified future, dapps and protocols will compile down to multiple VMs, and users and developers might not have to worry about a breakdown in the consensus mechanism of any one base-layer protocol. A "meta"-token could wrap both the native ERC20 and whatever the token specification is on another base-layer protocol. Maybe these token prices will be pegged to each other, or value will accrue in proportion to the amount of state each chain actually keeps. In this way, the different base-layer protocols may just be different shards on which protocols interact. Already, some tokens are looking at building on both Ethereum and NEO.
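One way to picture the "meta"-token idea (purely illustrative; no existing standard is implied): a wrapper that tracks how much of a fixed total supply currently lives as state on each base chain.

```python
# Illustrative "meta-token" ledger: total supply is fixed, but the share
# represented on each base chain moves as holders migrate their state.

class MetaToken:
    def __init__(self, total_supply: float, chains: list[str]):
        self.total_supply = total_supply
        self.on_chain = {c: 0.0 for c in chains}
        self.on_chain[chains[0]] = total_supply   # e.g. start as an ERC20 on Ethereum

    def migrate(self, src: str, dst: str, amount: float) -> None:
        """Burn `amount` of the wrapped token on `src` and mint it on `dst`."""
        if self.on_chain[src] < amount:
            raise ValueError("not enough supply on the source chain")
        self.on_chain[src] -= amount
        self.on_chain[dst] += amount

token = MetaToken(1_000_000.0, ["ethereum", "neo"])
token.migrate("ethereum", "neo", 250_000.0)
print(token.on_chain)   # {'ethereum': 750000.0, 'neo': 250000.0}
```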

If this vision of dapps on multiple chains plays out, competition between base-layer protocols based solely on the dapps they host may not be a long-term competitive advantage. Again, that hypothesis is premised on the belief that the switching costs of state are low, and it does look like that is happening. If a protocol advertises a competitive advantage just because it's building on a certain VM, that isn't going to last.

I don't offer much in the way of analysis here, just the observation that we're in the early days of crypto. There are many rungs on the ladder of abstraction yet to be formalized and built, and it's not immediately clear how scaling will play out or where the points of friction, and therefore the economic value, will be long term. You might say that the tokens with the largest network effect will win out, i.e., Ethereum. Yet the network-effect argument is self-referential: the more people use Ethereum, the better it gets, but the more people leave an ecosystem, the more unstable it gets. With flows between addresses and economic value exchanged cross-chain all visible in real time on decentralized exchanges and blockchains, we could watch network effects shift from Ethereum to a hypothetical competitor as it happens. We won't have to wait for Facebook to release its latest earnings report to learn it churned some X% of users. Please talk to me if you think I'm right or wrong :)

So What's in a PhD

I remember watching Dragon Ball Z, where Gohan's mom, Chi-Chi, always wanted him to get a PhD. This really hammered home the importance of the credential, the PhD, for being recognized as an expert. Since then, I've become somewhat of an autodidact who learns just for the sake of it. Recently, though, I tweeted this:


The response was surprising, but I stand by the statement. I first stumbled across this quote while reading "The Mathematical Experience." The "80 book benchmark" shattered the final remnants of my childhood illusion that you need a PhD, some mystic level of achievement, to become an expert. In its place stands a new belief: becoming expert-level is not that hard. It's a concrete milestone that anyone sufficiently motivated can achieve.

I really like the 80-book benchmark because it has all the classic signs of a good goal: it's measurable, achievable, and still decently ambitious, especially if you love books. Becoming an "expert" is not that hard, especially if you don't need the credentials. Thankfully, if you work in startups or are creating something, credentials are not that important; if you really do need them, you can always hire someone with the right three-letter acronym.

Eighty books, while seemingly daunting, is not that bad. The average US worker spends almost an hour commuting to and from work. If she used that time to read instead of 'gramming or texting, she'd get through a decent number of books per year. For a book printed in normal-sized font, a reader of this blog could probably read a page per minute, including the appropriate in-text highlights for retention. That means you, dear reader, could probably finish an average-sized book of ~360 pages per week, or ~50 books per year. You could get a "PhD" in two years with time to spare! [1]

The eighty-book mark is also great because it illustrates how little knowledge an individual needs to become an expert. Within startups specifically, the low barrier to expertise makes investment decisions in "inexperienced" or "young" founders less risky than they appear. I've already written about how young founders often found the biggest, baddest, and best companies. If you believe the thesis of this piece, being young is less of a disadvantage because it's so easy to get up to speed in an industry.

Expert-level specialization is still very real and necessary. Even a small-town library will usually have at least a few thousand books waiting to be checked out. If we only know 80 books' worth of knowledge, it's hard to imagine building a multi-faceted business, and with knowledge expanding at an exponential rate, it seems even more daunting. This is one of the reasons why being an expert or getting things done in the world still requires collaborating with others and/or using tools to manage knowledge.

Of course, the 80-book goal doesn't cover all the nuances of being an expert. On Twitter, others brought up several counterpoints. First, books aren't always the best source of knowledge. I think this is certainly true, so I'd amend the goal to reading 16,000 pages: 80 books' worth of material at 200 pages per book. This is especially true in fast-growing fields such as blockchain or AI, where the action is in preprints, blogs, and Twitter. Where you choose to get those 16k pages certainly makes a difference in what you learn. The best practitioners are often not the ones teaching the subject; their knowledge is either implicit or codified in a more free-flowing format such as a blog post. Take, for example, some of Vitalik's writings on cryptocurrencies. If you're getting into crypto, his posts will serve you much better than any book proclaiming that blockchains are the second coming of the internet.

Another common retort to the "80 books" claim was that being an expert is mostly about creation. However, people still need some base level of knowledge to be productive in a field, and as established above, 80 book-length pieces of information, or 16,000 pages, or two years of learning, seems about right to me. You're probably familiar with the "Whartonite Seeks Code Monkey" or "I can handle the business side" meme pages. In short, they poke fun at B-school students who don't really understand the mechanics of product or startups. When I first read TechCrunch and watched The Social Network, I 100% asked a technical friend of mine the same questions; I didn't have the requisite mental models of what a "startup" was to know why the request was a bit silly. After reading blogs, working on products, and talking to folks to pick up the implicit domain knowledge, I now do. More generally, understanding a domain lets you know what's at the "adjacent possible": the stuff that's hard enough that no one's done it yet, but not impossible. In physics, this would be the difference between working on gravitational waves and working on time travel.

I look forward to getting my PhDs in bio, brains, and blockchains soon :)

---
[1] The speed at which a person reads will depend on the subject matter. While reading Molecular Biology of the Cell, I read approximately 15 pages per hour while taking detailed notes. At 1000+ pages, MBOC would take me ~70 hours to read cover to cover. A normal college-level bio class probably covers half the material in the book, so I could cover a semester in ~40 hours, or a normal work week. The caveat is that this assumes reading 8 hours per day. Of course, I don't, but a sufficiently motivated individual who finds the subject matter interesting could. Warren does it.

Some Thoughts on "Confessions a Sociopath"

While browsing bookstores in NYC, I stumbled across a striking cover: a porcelain mask, female, red lipstick, with an attached popsicle-stick handle. My eyes wandered down to the title in the bottom left-hand corner, "Confessions of a Sociopath". Intrigued yet hesitant, as I don't normally read pop psych, I picked the book up. I put it down twenty pages later and didn't purchase it. It was a little too spooky for me. When you're left with a new lens for viewing your friends, colleagues, and possibly yourself, you'd feel the same way.

I ended up purchasing it at another bookstore later in the same day.

M. E. Thomas, a pseudonym, writes in an extremely readable, transparent style. The compact volume of three hundred or so pages reads a bit like a diary, which is exactly what a sociopath would want: we want to feel like we know the other person. Yet, true to her sociopathic nature, the prose is lightweight, easy to reach for, and a bit detached; just what we'd want in a fling, to be drawn in, to imprint our own desires onto, and to be left wanting more. An early moment we experience is mom and dad driving away and forgetting her at the park. A moment that "normiopaths" or "empaths" would regard with fear, tears, or some other visceral reaction, M. E. takes as a chance to prove that she can live without them. M. E. reveals nothing, and with this style she draws us into her inner world.

We follow M. E. as she navigates growing up in a somewhat dysfunctional household and matures into a beautiful, intriguing, and cold young woman. Readers may relate to some of her childhood experiences, especially anyone who has been an outsider or immigrant in a new community. When you come in as an outsider, there are cultural norms, language cues, body-language differences, and inside jokes that insiders pick up innately and outsiders must learn intentionally. The difference here is that, for M. E., the language to be learned is that of emotion, something the rest of us take for granted. The only strong desire she expresses is for power: control over her environment and everyone around her.

We discover how she manipulates the people around her, often without their knowing; we learn, as she does, that emotions play no part in her mental world and that rules that don't advantage her can be broken. We're reminded of rebels, criminals, and vampires, the darker archetypes of our mythology, characters we're enthralled with, at least in that they enjoy freedom from internal and societal retribution. By continually drawing on examples from literature, particularly Steinbeck, she reminds us of favorite characters and perhaps of people in our own lives who fit the sociopathic mold. M. E. also draws on brain imaging, clinical research, and psychiatric definitions, which gives this extremely transparent, personal narrative a touch of scientific authority without being drawn out.

The worlds of work and love figure heavily in this book. Sociopaths, we learn, turn out to be tailor-made for corporate capitalism. Money, the impartial thing so much of daily life centers on, is a sociopathic object: it can be transformed into whatever desire we happen to hold. In jobs that involve stress, acting, or even ordinary office politics, sociopaths are able to lie and win their way into higher and higher positions, and they handle the stress of firing someone or launching a new product better than the rest of us. However, we find through personal anecdotes that a cutthroat character isn't always as advantageous as it seems; the same impulsive behavior becomes a liability in the long-term relationships needed for management positions. I often thought of Steve Jobs as a possible archetypal sociopathic CEO, driven to a great product through a path of scattered emotional breakdowns.

We later turn to the subject of love. As noted before, it's more than tough for her to maintain a long-term relationship when the default position is to be whatever your lover wants you to be. But as we know, vulnerability, being your true self, or at least acting and speaking as if you have nothing to hide, is the key to long-term relationships.

When we look around the office, or our college campus, or even into our loved ones' heads, we often wonder what is going on behind their eyes. In a certain way, while reading, I was reminded of the Turing Test: how can you tell whether this thing producing some output is intelligent and/or conscious? Extend the metaphor to emotions: do I actually know what this person is feeling at this moment? It's a bit frustrating. There will always be a gap in understanding when dealing with other people, simply because we haven't lived exactly the same experiences.

How do we know our lover's smile is genuine? What if, like a chameleon, our lover is producing this contortion of facial muscles to provoke the response they desire? Their ends may not be simply to please us; they may be planning three steps ahead, using the goodwill generated by that smile to cajole us into changing the channel to whatever they wanted.

At the end of the book, we're left with M. E. as she goes about her life without a care in the world, without attachment, yet desiring a real connection and wanting kids, and we're struck by the normalcy of it all. These are desires all of us feel; her mental world has just molded her perceptions into a slightly different arrangement. Our biological drives, along with our upbringings, really can make a difference in our lives.

If you read "Confessions of a Sociopath", you will wear the sociopath's mask. Some of you may find that it fits your face perfectly, and you may gain answers to some pesky questions you've always wondered about yourself. If not, you may be disgusted and put off, but you will certainly wonder more about the man on the train with a certain glint in his eye. What is he thinking? How does he feel, if anything?

Biocomputers

A mostly speculative post on the far-ish future of biology.

This essay is a spiritual successor to my previous post on the subject. If you're an investor, feel free to invest with that essay's thesis in mind :). I'd like to take a few steps forward into the future and try to reason backwards to where we are now. I began the other essay with a comparison to the mainframe era, and I'd still like to draw on the computing metaphor.

Most people identify Intel and the microprocessor as the key innovation of the computing revolution. The same could be said of the Apple II, which finally incorporated the microprocessor into a consumer-ready, integrated product. I won't argue for or against either as the marker of a new age. Either way, the two technologies were unequivocally tied together; they bookended the period in which the microprocessor led the way to general-purpose computing for everyone.

The integrated circuit was the culmination of billions of dollars in R&D, and today the heir to that technology is the iPhone 8, which holds transistors that would have cost some $150 trillion at 1957 prices. These devices let you do essentially anything and are the cornerstones of global communications and global money. A person could live their life with just a phone.

I wonder what set of innovations might allow for the equivalent exponential jump in biology: the microprocessor for biology. What's the equivalent of a general-purpose computing device in biology, and why would we even want one?

First, let’s look at the definition of the microprocessor according to Wikipedia.

"The microprocessor is a multipurpose, clock driven, register based, digital-integrated circuit which accepts binary data as input, processes it according to instructions stored in its memory, and provides results as output.”

If we swap binary data for DNA, that sounds a lot like what a nucleus does. The speed and accuracy with which we can create new strands of DNA is limited right now. Biology is, of course, general purpose: the same DNA code that builds humans can be used to build algae. However, most DNA is assembled for a specific purpose. The software, the ACTGs of DNA, is still way too expensive to sequence. Additionally, de novo gene synthesis and assembly, or making long DNA strands from scratch, is doubly expensive. While we herald a $1,000 human genome sequence, and soon a $100 one, the cost really needs to be close to zero. And while a single base pair costs about $0.02 to synthesize, that also needs to be close to zero.

Why do I think $0.02 is way too high? Think about it this way: if every line of code cost $0.02, we would not have operating systems or any of the wonderful things we depend on today. To get truly ubiquitous DNA manipulation, the cost has to be effectively zero, like manipulating electrons in a personal computer.
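To put $0.02 per base pair in perspective, here's the rough arithmetic, assuming ~4.6 million base pairs for an E. coli genome and ~3 billion for a human genome:

```python
# Why $0.02 per base pair is still far too expensive (rough arithmetic).

COST_PER_BP = 0.02                 # approximate synthesis cost per base pair, USD
E_COLI_GENOME_BP = 4_600_000       # ~4.6 Mbp
HUMAN_GENOME_BP = 3_000_000_000    # ~3 Gbp

print(f"E. coli genome from scratch: ${E_COLI_GENOME_BP * COST_PER_BP:,.0f}")   # ~$92,000
print(f"Human genome from scratch:   ${HUMAN_GENOME_BP * COST_PER_BP:,.0f}")    # ~$60,000,000
```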

In short, a biological microprocessor, a bioprocessor for short, would be able to manipulate DNA and spit out the results, the biological and chemical components of whatever we wanted, at near-zero cost. An integrated biocomputer would take inputs such as single cells, small molecules, blood drawn from individuals, and enzymes, and return new cells with the right genes inserted. Attached to the main bioprocessor would be other peripherals: microscopes, perturbation and electroporation devices, incubators, bioprinters, fluid and solid handling devices (think needles and the like), as well as connections to traditional chips.

Fundamentally, having a digital bioprocessor, or some personal-computer equivalent, could lower the cost of biological creation by several orders of magnitude. The tabletop sets we have today for home biology are the equivalent of HAM radio sets, so it will be some time before we have anything really cool. But biology has the same property of being an information science. Like the pre-personal-computer, pre-internet era, we still have to go to separate sources to gather all of our biological material: we travel to the grocery store, we go to the mall to buy creams synthesized by snails, we get surgery and pay money to look different, we go to the pet store for pets; even the clothes on our backs are made from organic materials. If we could download creams, seeds for food to be grown, and drug treatments, we could enable biological creativity like we have in bits.

One use case that bioprocessors could dramatically change is drug and medical treatment. The Martin Shkreli and EpiPen snafus could be avoided by at-home production of molecules and treatments. But if the marginal cost of a treatment is zero, how are drug development costs to be amortized? Creating a blockbuster drug today costs billions, so what happens when individuals are able to "download" medicine for free? Of course, this is a moral dilemma: orphan-disease gene therapy treatments cost consumers $500,000 for one treatment, which seems a bit outrageous.

Business Models for Biology

Bioprocessors should have two first-order effects on biology: decreasing the cost of production and the cost of distribution. We just have to look to software. As we've seen with the internet, a radical shift in the cost of distribution has reshaped industries and will continue to do so; 10-1000x cost reductions lead to startups disrupting industries. With the internet, everything either became free, had a SaaS/API model attached, or birthed a marketplace. Each download or use will cost some amount, like hitting an API endpoint.
  • Music -> piracy (zero-cost distribution) + lower production cost = free initially, now a SaaS model; litigious for sure.
  • Movies -> high production costs, lower discovery/distribution cost = SaaS model (Netflix).
  • Banking -> high production/integration cost = now has an API; we have Stripe.
  • Housing -> high production cost, high discovery cost = marketplace model (Airbnb).
The same will happen with biology. The effect on food will be different from the effect on pharma, and that's related to the market dynamics of production, distribution, and reputation. All these elements add to transaction costs, and as we know, transaction costs govern where fat businesses are made: sit on top of a fat pipe of transaction costs and win money for a long time. One worry people have is drug piracy. If the cost of downloading a drug effectively drops to zero, what happens to the dollars that need to go into research?

A bioprocessor and its associated peripheral devices could have a few effects on drug development. The cost of research should be much lower, allowing more drugs to come onto the market; however, determining efficacy will still be hard, so brands or marketplaces should establish themselves.

However, free in biology isn't necessarily bad. People don't always need to be motivated by (direct) monetary ends to contribute: the Debian ecosystem has had roughly $20 billion of work put into free software, and this isn't just random stuff; it runs on almost every internet-connected server, and we depend on it for critical infrastructure. We could potentially have freely designed pest-resistant seeds that farmers could use instead of ones controlled by huge corporations.

We might have a SaaS-like business model for individuals to purchase treatments (Illumina and the gene-therapy market have the right idea). However, we'll have to deal with data security: medical records are worth ~20x your credit card information on the black market, and there is no way I'd want my health information hacked. A more fun SaaS business might be a custom-designed hair product and colorizer: input a strand of your hair, enter the desired hairstyle and texture, and out comes a specially designed set of creams that actually changes hair growth at the follicles. If we can change the follicles, we can change the color and texture of our hair at will, for longer, cheaper, and more safely than we do now.

If we go to space, we'll certainly need and want different biological tools. Space radiation can kill, just as scurvy once killed sailors, and its effects could potentially be curtailed by editing 4 SNPs, designs that could be free. A digital biocomputer would be a necessary tool: we're not going to have a lot of room on those spaceships, and we're going to need to bring a lot of things. The best way to compress things is as pure information.

All of these are possible arrangements for how the bioprocessor could change the production and distribution of organic materials. But we're sadly still a ways away.

Today: Complexity

Computer scientists severely underestimate the complexity of even single cells. Cells are really, really complex to model and build, especially if you want atomic-scale precision, and atomic-scale precision is often what you need: polymerase, after all, is atomically precise. It maneuvers individual atoms into place, and we can thank evolution for the fact that we only get a handful of mutations per few billion base pairs copied. To simulate one cell at that level, we'd need Moore's Law to continue for another 50 years (so we'll basically need quantum computers to continue the trend); for a whole-brain simulation, closer to 100 years. Another example of complexity is protein structure.

We'll either need to reduce the modeling accuracy of our systems (as we've done with deep learning) or use biological techniques in addition to computational models. We can use bioprocessors as a way of studying cells, directing their evolution, and creating anything we want. On the way to this glorious biologically infused future, there are many roadblocks to creating the components of a bioprocessor or personal biocomputer.

A future post will speculate in detail on 1) what a bioprocessor actually looks like 2) who’s working on this stuff now and 3) what else is holding us back.

Merging Chains

Written by Niraj and Dillon

If you posit that bitcoin has a network effect, then the more people that use the currency to transact, the more valuable the coin becomes; the more valuable the coin becomes, the more users you get and the stronger the network effect. Additionally, if a longer chain history means better security and more miners mean better security, is there a way, in the long run, to increase the network effect by merging chains?

Right now, we've only got people doing forks. Forks are important: they allow for experimentation on rule sets. However, they potentially reduce the overall network effect of any single token. Forks are also good because they align incentives with the people who have already done work on the master chain. In the example of ETH and ETC, we'd argue it's a feature that the Ethereum Foundation automatically held both ETH and ETC without anyone's permission: they stand to gain from the economic value created by another development team. The new development team wins because they get a built-in customer set, the set of public-private key pairs that already hold ETH. This is a subtle shift in incentives; we'll write more about it later...

While we're not advocating for a maximalist approach (the idea that there should only ever be one token), it seems like there should be a process for merging chains just as there is a process for forking. There is an argument to be made that the "Core" teams or the Foundations bearing the base token's name centralize development resources: in BTC and ETH respectively, only 5 and 2 developers make up the majority of commits. Forking seems to have become a way for talented devs to work on protocols; just look at LTC and @satoshilite.

Additionally, we see that experimentation followed by recombination has been a net positive for society in other areas; allowing for experimentation and merging isn't limited to blockchains. Just to name a few:
  • Policy experimentation within a federal system of government. I.e. adoption of a precursor to the Affordable Care Act before it became national law.
  • Startups as new entrants that can be acquired or grow to be large companies.
  • Spin offs from large corporations. Standard Oil became several smaller companies and Rockefeller was richer for it.
  • Mitochondria being swallowed to become the powerhouse of the cell.

In blockchain terms, you could conceive of merge mining as extended uncle resolution. In the GHOST protocol, an uncle's hash power is added to the winning block's score, and the uncle's miner is still incentivized: they get some proportion of the block reward. Likewise, people who contributed to the "losing token" would still be incentivized; when you merge chains, you're compensating the smaller chain for its absorption into the larger one. While protocols can directly implement the hard/soft forks needed to adopt a fork's rule-set changes, they won't get its now-differentiated user base that way.

How to Do Merges

There are two methods for potentially merging tokens (and probably more that we haven't thought of).

The first method is pegging a token A to token B.
  1. Agree on a price/exchange rate for A:B
    1. Oracle to determine price
    2. Hash power signaling/ratio
    3. Market pricing on exchanges
  2. Hard fork both protocols to have the same block + rule set
    1. Enforce a specific block height for the rule change, include the pegged price ratio
    2. Price converges
  3. Before the rule set is implemented, people are free to trade out of token B
  4. Allow for atomic cross chain swaps
    1. Using Decred or 0x → hard code this into the rule set change
The second method involves one chain "absorbing" the value of the other, meaning that token A remains and token B is never used again. (A toy settlement sketch follows this list.)
  1. Agree on a price/exchange ratio for A:B
    1. Oracle to determine price
    2. Hash power signaling/ratio
    3. Market pricing on exchanges
  2. Acquire buy out funds for A to purchase B
  3. Post a public address where all B tokens can be sent to
  4. Before the rule set is implemented, people are free to trade out of token B
  5. Burn the B tokens; each B token holder will get the agreed-upon amount of token A in proportion to how much they sent to the specified address
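A minimal sketch of the second method's bookkeeping, assuming the exchange ratio has already been agreed in steps 1-2; the addresses and numbers are hypothetical.

```python
# Toy settlement for the "absorption" merge (method two). Assumes the A-per-B
# ratio was already agreed (oracle, hash-power signaling, or market pricing)
# and that all B sent to the burn address is destroyed.

def settle_merge(b_sent_to_burn: dict[str, float], ratio_a_per_b: float) -> dict[str, float]:
    """Return the amount of token A credited to each address that burned token B."""
    return {addr: amount * ratio_a_per_b for addr, amount in b_sent_to_burn.items()}

burned = {"0xalice": 1_000.0, "0xbob": 250.0}
print(settle_merge(burned, ratio_a_per_b=0.2))   # {'0xalice': 200.0, '0xbob': 50.0}
```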

Roadblocks to putting this in practice.

Both of these scenarios involve a lot of coordination. Imagining a protocol merge without some explicit voting mechanism beyond hash-power signaling induces a headache right away. The future of decentralized governance will definitely play a large part in how these things happen.

Also, as we see in centralized mergers and acquisitions, the larger company often has to purchase the shares of the smaller company at a premium. We'll have to establish better pricing mechanisms beyond hash power. Ari Paul and Chris Burniske have been doing a lot of great work on fundamental valuations here.
Additionally, atomic cross-chain swaps are not the only way to transfer a token from one chain to another; a protocol such as Polkadot or Cosmos might allow for this sort of thing as well.

Real-World Protocols That Could Benefit

These wouldn't have to be just currency tokens; you could potentially merge utility tokens as well. For example, look at Sia and Filecoin. If Filecoin were to establish a dominant market cap and share position, it might behoove them to purchase the Sia network. One additional step would be needed: before individuals could claim any of token A, they would need to transfer their files over to the new blockchain. Once that is done, they can claim their Filecoin tokens.
  • Small cap token mergers
  • Prediction markets (Augur and Gnosis)
  • File storage markets (Filecoin, Sia, and Storj)
  • BTC variant mergers (BTC, LTC, BCC)

----
