tag:dillchen.posthaven.com,2013:/posts Dillon Chen 2021-04-11T20:36:19Z Dillon Chen tag:dillchen.posthaven.com,2013:Post/1298994 2018-07-09T23:00:00Z 2019-03-01T17:37:02Z Blockchain Based Control and Safety of Artificial Intelligence
Buzzwordy title alert.

Although many individuals had worried about recursively self-improving AI, the alarm wasn’t really sounded until Nick Bostrom wrote Superintelligence. For readers unfamiliar with why superintelligent AIs, AGIs for short, might be scary, my notes or this post here are a good starting point. Long story short, an AI that is vastly more intelligent than us and isn’t aligned with our interests may decide to do something that isn’t in our best interest. 

The oft-quoted example of an AGI, aka a superintelligent AI, gone awry is the paperclip maximizer. While this example doesn’t capture all the nuance, it gets the gist of the problem across: an AGI is created whose sole goal is to create as many paperclips as possible, and since it’s so good at its job, it ends up killing all humans and turning all matter into paperclips. A more “human” example of an AGI gone awry is a corporation, say Enron or any oil company. Cash flow and profit, the internal metrics of success or objective functions they use, become divorced from the original purpose of creating a good for society. Bitcoin and other cryptocurrency networks also represent a kind of recursively improving organism with no clear point of disconnection, which has some individuals worrying about blockchains and AI together. An AGI gone awry would be the principal-agent problem on steroids. You could well argue that Bitcoin and other cryptocurrencies, especially the Proof-of-Work variants, are already a version of this paperclip maximizer. 

The basic assumption researchers in the field make is that AGI is going to happen someday. If not 15 years away, then less than 100. A hundred years in the course of the universe is nothing. Therefore, solving the problem of defining an objective function, or guardrails, for an AGI is of the utmost importance. Sadly, this work isn’t well incentivized today. The work that has been done can be summed up as follows:

  • Alignment: making sure its objective function doesn’t kill us. The work I’m most familiar with is coherent extrapolated volition and approval-directed agents.
  • Capability restraint: for example, an AI that is air-gapped from the internet and can give only yes or no answers, aka a genie.

However, Bostrom presents another idea on AI control that I think doesn’t get enough coverage. In a few short words: "make the objective function tied to the acquisition of some cryptographic token". While this seems unintuitive at first, it is akin to us trying to earn money, or dogs doing tricks for doggie treats. In the original proposal, Bostrom suggests using a centralized cryptographic token managed by scientists. Superintelligence was published before the current hype cycle and before much of the theoretical work on new cryptographic primitives had begun. Since then, there has been a bit of fervor over how blockchains can increase the capability of artificially intelligent systems, for example Computable providing more data sets, but not much has been written about the safety side. (No surprise there.) Here are some specific high-level proposals that can be stacked on top of each other to control and align agents.
  1. Use a decentralized cryptocurrency as the reward function. This one is straightforward enough. Using a centralized cryptographic token as the goal suffers from the same problem that kept centralized digital currencies from taking off: it introduces a single point of failure. If a scientist is somehow held at gunpoint by an AGI, he or she will probably hand over some tokens. It’s much harder to hold a network of miners and anonymous token holders at gunpoint.
  2. Instantiate the AGI as a DAO. This lets the entity operate trustlessly, which is a double-edged sword: the AGI can sustain itself and operate with or without supervision, but it also leaves an auditable trail of where and when the objective-function cryptocurrency was added to a specific address. 
  3. Define the reward function as a smart contract to be executed trustlessly. This is where it gets a little harder to conceptualize. We can state in plain English what something is, and this matters for reinforcement learning agents: objective functions for StarCraft or Go are simply to win the game. However, we may want to check up on the operation of the AI and update the objective function as we go, without letting the agent itself change any part of it. To do that, use a widely distributed governance token, so pseudonymous actors can approve changes to the objective function. Keep their identities private so the agent can’t harass or bribe them, and monitor past voting behavior by keeping a trail of “reputation” for voters to check for bribery; this can also be determined on-chain.
  4. Use curve-bonded tokens to get rid of takeover attacks. Curve-bonded tokens have programmatically defined prices for minting and redeeming (and then burning) a set of tokens. To perform any goal, the agent is probably going to have a lot of cash on hand. What if it tries to buy up the supply of the governance token? That would be bad, because then it could change its own objective function. To prevent this, we can set the curve so that the purchase price becomes absurdly high as more tokens are minted, and correspondingly set an extremely small sell-out price to disincentivize any sales (see the bonding-curve sketch after this list).
  5. Use TCRs (or some other game-theoretically sound ranked list) to tokenize “human values” and direct the AGI to optimize for that set of values. The previous example talked about defining a goal in terms of ETH held. That would be easily calculable if the goal of the agent were to maximize the NAV of its investment portfolio. However, as we know today, defining something purely in terms of money can lead to perverse outcomes. If the means of money become the ends, agents are pushed toward greedy short-term actions.
  • Instead, we might want to optimize for human well-being. How do we define this on-chain so the measure can’t be hacked by an autonomous agent? We utilize decentralized stake-based rating games, namely TCRs with a curve-bonded token for staking. You can read a little more about TCRs here.
  • Back to representing human well-being “on-chain”. First, we have to define how it is measured in the real world. Various NGOs and ratings organizations track things like the HDI, the Human Happiness Index, and GDP per capita. These are top-line objectives that countries may aim for through actions that make individual citizens happy. Of course, countries are free to ignore these ratings; autonomous agents won’t be if their objective functions are locked down.
  • So how does that tie into the blockchain? These indexes have a large self-reported component right now, and TCRs are good for encoding intangible and subjective information into hard economic terms. By creating a list that might be composed of “happiness”, “wealth for humanity”, and “sugar, spice, and everything nice”, we might have the agent take off-chain actions that benefit humanity.
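To make the curve-bonded token idea in item 4 concrete, here is a minimal sketch in Python. The cubic curve shape, the tiny redemption haircut, and all function names are my own illustrative assumptions, not part of Bostrom's proposal or any specific token standard; the point is only that mint prices can be made to explode with supply while redemption pays out almost nothing, so a cash-rich agent can't cheaply corner the governance supply.

# Toy bonding curve for a governance token (illustrative assumptions only).
# Mint price grows with the cube of supply; redeem price is a tiny fraction,
# so a cash-rich agent cannot cheaply corner the governance supply.

def mint_price(supply: float, base: float = 1.0) -> float:
    """Price (in reserve currency) to mint one more token at the given supply."""
    return base * (1 + supply) ** 3

def redeem_price(supply: float, base: float = 1.0, haircut: float = 1e-4) -> float:
    """Price paid out when burning one token -- deliberately far below the mint price."""
    return haircut * base * (1 + supply)

def cost_to_acquire(n_tokens: int, current_supply: float = 1_000.0) -> float:
    """Total cost for an attacker to mint n_tokens starting from current_supply."""
    cost, supply = 0.0, current_supply
    for _ in range(n_tokens):
        cost += mint_price(supply)
        supply += 1
    return cost

if __name__ == "__main__":
    # Minting even 100 extra tokens at supply 1,000 already costs ~1e11 units of reserve,
    # while selling them back returns almost nothing.
    print(f"cost to mint 100 tokens: {cost_to_acquire(100):.3e}")
    print(f"redeem price per token:  {redeem_price(1_100):.3e}")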

The largest point of failure would seem to be the voters, especially if their identities are revealed. Perhaps we can have less intelligent agents that vote on issues for the most intelligent agent, each with their own objective functions that need to be modified. With any organization or incentive structure, there always needs to be a balance between being able to change something and not letting the wrong actors change it. I think this game is especially fun to play when thinking through an actor that is vastly more intelligent than I am. 
Dillon Chen
tag:dillchen.posthaven.com,2013:Post/1298990 2018-07-02T03:08:35Z 2018-07-21T12:48:48Z Inadequate Equilibria: Part 1
The world can be a depressing place if you look at the second-by-second news cycle. If you were to print a hundred-year newspaper, it’d probably be mostly good news. Of course, we live in the second-by-second cycle, and much of life is predicated on living in and interacting with these relatively broken systems. This is something that’s played out throughout history, and the proposed answers have ranged from exit (Thoreau’s Walden, or those who choose to live off the grid these days) to the revolutionary (communism and socialism, which sought to completely overhaul the system). Inadequate Equilibria is in the same vein as Freud’s Civilization and Its Discontents.

As a human being, I think it’s critically important to contribute to this project. There are individuals who choose to exit the system: the Airbnb host, or the person who just lives off a 4% drawdown of their existing assets (aka a retiree at any age). Of course, indirectly you’re still slowly allocating capital to the right places, but this could be done much better.

As a startup founder, investor, and general human being, much of my time in life is spent on these problems: weaseling our way toward something that makes a big difference in the world. Many of the systems seem so hard to flip or change. While this is something we know intuitively, in a quick 170 pages Yudkowsky characterizes it in a clear voice without resorting to throwing up his hands. 

Starting from a theoretical basis, he seeks to answer the question, “why are so many aspects of the world not optimized to the limits of human intelligence in the manner of financial prices?"

Depending on your perspective, pouring so much of our collective human intellect into optimizing short-term financial prices could be heartening or disheartening. It can be disheartening to see so much talent under-allocated to efforts that don’t seem to produce end results. After all, no one likes paperwork. Doubly so, no one likes paperwork that doesn’t mean anything. Triply so, no one likes paperwork that was created by “the system” or “the man”, that you have to adhere to or else you won’t be able to eat, and that you can’t change because then “the man” won’t be able to eat. 

The book can loosely be divided into three sections. This post is the first in a series.
  1. Laying out the meta concepts of inefficiency, inexplicability, and inadequacy
  2. Inadequate equilibria in all areas of society (to be published)
  3. Adding inadequate equilibria to your mental toolbox and life (to be published)
Much of technology can be characterized as the attempt to bring adequacy to human endeavors. Once upon a time, markets were so inefficient with respect to information that Ben Graham could make money buying stocks whose market value was less than book value. Now that markets and systems are inadequate with respect to our wetware and incentives, they can be much stickier and harder to change. Of course, Charlie Munger has often noted that he has underestimated the power of incentives.

This book is such a master course in rationality, society, and how to act that I think it deserves a separate post for each section. I’ll cover the first here.

“If I had to name the single epistemic feat at which modern human civilization is most adequate, the peak of all human power of estimation, I would unhesitatingly reply, ‘Short-term relative pricing of liquid financial assets, like the price of S&P 500 stocks relative to other S&P 500 stocks over the next three months.’” But why? We’ve often considered financial markets the nervous system of the economy, the best way to relay information.

Allocating capital well gives equities a lever:
  • A lower cost of capital improves RoE and RoIC, enables a better capital structure, makes it easier to retain employees because bonuses are worth more, and lets the company leverage its stock price to acquire new companies.

Eliezer introduces the concept of modest epistemology: the later-debunked notion that you should trust the expert view most of the time, unless you really have an informed opinion or have put in the time. Often, this is the most social-status-oriented view of the world. 

Last, he introduces his self-directed treatment of his wife’s SAD (Seasonal Affective Disorder) as well as his dandruff problem.

With these three examples, he respectively illustrates the notions of inefficient markets, unexploitable markets, and inadequate systems. 
  • Inefficient
    • Example: an equity (Apple?)
    • With respect to: the average person. Hedge funds can still take advantage of this by gathering specialized information.
  • Unexploitable
    • Example: shorting real estate, or bad monetary policy (the Japan example)
    • With respect to: it is adequate to the point where there are not a lot of underpriced houses, yet because you aren’t able to short a single house, no financial product exists to exploit the mispricing; CDSs and funds can only take advantage at a systematic level.
  • Inadequate
    • Example: the current state of venture funding, colleges as credentialing systems, and the US healthcare industry
    • With respect to: the normal “in-game” view; a God’s View or benevolent dictator could overcome it.

All these classes of markets or systems are adequate with respect to something. Markets are efficient relative to the average individual but not to hedge funds. The average investor isn’t able to find alpha because the remaining price changes are not predictable.

The view presented is that markets and systems have predictable movements in prices until they reach some equilibrium point, the balance of supply and demand. Each individual agent is trying to sop up as much “free money”, in the form of predictable price movements, as they can. While inefficient markets are systems where price can be the sole signal of value, inadequate systems are more complicated. Each agent within the system is trying to fulfill their own incentives; whether they’re striving for fame or curing individuals of diseases, their behavior is shaped by incentives. And right now, these incentives are out of whack. The ways in which they get out of whack are collectively known as “Moloch’s Toolbox”. (Sidebar: if you don’t know Moloch, then I don’t know you ;).) Collectively, these are the tools below:

  • Principal-agent problems (the people who make decisions don’t benefit from them)
  • Information asymmetry
    • Example: colleges act as a filter for 1) hardworking kids and 2) smart kids
      • But 4 years and $250k is a lot to pay just to prove that...
    • Common knowledge -> how do things settle at this point?
      • It’s a signaling/asymmetric-information problem
  • Nash equilibria of misaligned incentives that are not Pareto optimal (a toy payoff example follows this list)
    • Two-factor markets and signaling equilibria
    • Not Pareto optimal -> a move exists that would make everyone better off, but no one can make it unilaterally.
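Here is a toy example of that last point, sketched in Python; the payoffs are my own illustrative numbers, not from the book. Everyone coordinating on "old" is a Nash equilibrium because no one gains by switching alone, even though everyone switching to "new" together would leave both players better off.

# Toy 2-player coordination game (my own illustrative payoffs, not from the book).
# Both players choosing "new" pays more than both choosing "old", but if the other
# player sticks with "old", unilaterally switching makes you worse off -- a Nash
# equilibrium that is not Pareto optimal.

from itertools import product

PAYOFFS = {  # (row_choice, col_choice) -> (row_payoff, col_payoff)
    ("old", "old"): (1, 1),
    ("old", "new"): (0, 0),
    ("new", "old"): (0, 0),
    ("new", "new"): (3, 3),
}

def is_nash(profile):
    """No player can improve by unilaterally deviating."""
    row, col = profile
    for alt in ("old", "new"):
        if PAYOFFS[(alt, col)][0] > PAYOFFS[profile][0]:
            return False
        if PAYOFFS[(row, alt)][1] > PAYOFFS[profile][1]:
            return False
    return True

def is_pareto_optimal(profile):
    """No other profile makes someone better off without making someone worse off."""
    p = PAYOFFS[profile]
    for other, q in PAYOFFS.items():
        if other != profile and q[0] >= p[0] and q[1] >= p[1] and q != p:
            return False
    return True

for profile in product(("old", "new"), repeat=2):
    print(profile, "Nash:", is_nash(profile), "Pareto optimal:", is_pareto_optimal(profile))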

Using these as a lens to view society is pretty powerful. But the real question is: how can we break the grip of these equilibrium points? Usually, the answer has come in the form of billionaires, or those with the requisite skills plus luck, resetting the systems. Examples of these reset points might be Bitcoin and SpaceX. Both act as a reset for the systems they are compared against: centralized banking and traditional contract-based space agencies.

While Moloch’s Toolbox is extremely simple, there are different counterarguments against it. On one hand, you can say that everyone is self-interested and things won’t change because of that, the “cynical economist” view. Or you can hold that systems are bad because people are bad and just bad at coordinating, a more nihilistic view. Either way, if you’re a startup founder or trying to change the world for the better, you’re fighting against multiple forces: the “system”, the “haters”, and the “cynics”. The combination of those three forces makes it quite hard to craft a clever solution. You can’t just build a better mousetrap and hope people will come; you need to hit some critical mass of different stakeholders to “flip things". That is incredibly hard, but extremely worthwhile.

Dillon Chen
tag:dillchen.posthaven.com,2013:Post/1294336 2018-06-15T14:45:58Z 2018-06-16T18:57:14Z Early Adopters of Crypto
Attention is the scarcest thing in the world. On a macro level, the world is awash in capital; interest rates in some countries are below zero. Yet within our daily lives, there are always thousands of things competing for our attention. A question I like to think through is: where are the early adopters focusing their limited attention? Chris Dixon says it’s on people messing around in garages building something. A revised question along those same lines is: 

Which nation/market is an early adopter of technology? How do their market dynamics predict what might happen in another geography?

First, a little theory. The world is a connected graph of people, and word of mouth is the thing that really gets people to adopt products. Facebook decreased the six degrees of separation down to around 4.5. However, the distribution of connections between people isn’t even, and when we think of information flow, it’s more of a uni-directional graph: person A can influence person B, but usually not vice versa.

When I think of how information spreads, I think of tinder over dry terrain. A spark doesn't catch 100% of the time, but when it does, there's the potential for a cascade of "catching fire". Within a network, there are early adopters and late adopters, differentiated by personality traits, sources of information, and levels of connectedness in both the upstream and downstream directions. I usually split the adoption curve into three sets of people (a toy cascade simulation follows the list):

  • 1) People who do things because they are novel or cool. This is an intrinsic motivator. These are early adopters.
  • 2) People who do things because there’s an economic need. This is an extrinsic motivator. These are middle adopters.
  • 3) People who do things because everyone else is doing them. These are late adopters.
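Here is a toy version of that uni-directional cascade in Python. The graph, the seed nodes, and the 60% conversion probability are all made-up illustrative numbers; the point is just that who the early adopters are, and what sits downstream of them, determines how far a spark travels.

# Toy directed-graph cascade (illustrative model, not fitted to any data).
# Influence only flows along edge direction; early adopters seed the cascade,
# and each newly converted node converts downstream neighbours with probability p.

import random

# node -> list of nodes it can influence (direction matters)
GRAPH = {
    "early_1": ["mid_1", "mid_2"],
    "early_2": ["mid_2", "mid_3"],
    "mid_1": ["late_1"],
    "mid_2": ["late_1", "late_2"],
    "mid_3": ["late_3"],
    "late_1": [], "late_2": [], "late_3": [],
}

def cascade(seeds, p=0.6, rng=random.Random(42)):
    adopted, frontier = set(seeds), list(seeds)
    while frontier:
        node = frontier.pop()
        for neighbour in GRAPH[node]:
            if neighbour not in adopted and rng.random() < p:
                adopted.add(neighbour)
                frontier.append(neighbour)
    return adopted

print(cascade({"early_1", "early_2"}))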

So now that we have that out of the way, this is my current mental model for crypto adoption.

I am increasingly looking toward Asia for technology, and more specifically Korea for cryptocurrencies. Due to the special shape of its social graph, Korea has interesting winner-take-all dynamics and tends to adopt early. Information spreads quickly because of the connectedness and centrality of that graph: the whole nation uses KakaoTalk, has high-speed internet access, has a high appetite for novelty and coolness, and has very tight-knit business communities, and Koreans have historically been early adopters of new technologies. Before the States got around to these things in Web 1.0, Korea was already on top of camera phones in the early 2000s, playing MMORPGs, and doing over-the-top streaming (aka Netflix-style viewing).

Bill Gurley and associates caught onto this trend and planned a trip to Korea to see what might be gleaned from that market. What resulted was a sharpening of their thesis around Social, Local, Mobile. When the iPhone hit everyone’s hands in 2008, we had the confluence of the internet, GPS, and a camera in every pocket. The rest is history: that Benchmark fund invested in a plethora of internet hits, most notably Uber and Snapchat.

The current environment in Korea is pretty telling. Around 30% of South Koreans own or hold some sort of crypto, past the tipping point for widespread social adoption. When the regulators tried to shut exchanges down, HODLers raised their voices. I’m excited to see how individuals interact with token-powered protocols as usability and scalability let us move down the marginal-benefit curve of cryptocurrencies. While we’re still stuck in the store-of-value and speculative era of cryptocurrencies, that should change soon.

Even now, as staking protocols begin to proliferate, crypto holders are looking to gain an edge by earning incremental tokens. We should start to see protocols like Vest and Compound.Finance gain adoption as the friction of using them begins to drop.

I’m personally not as bullish on developing countries as a leading indicator of early adoption. As weird as it sounds, they need cryptocurrencies too much. My mental model of early adopters is the people who like toys, the weirdos, the rich, and others willing to accept the flaws in a product. There’s something about intrinsic motivation, as opposed to extrinsic motivation, that drives the stickiness and retention of a product or technology. I would much rather look toward high-risk-tolerance ICO investors than toward traditional businesses and crypto “enterprise alliances”.

Dillon Chen
tag:dillchen.posthaven.com,2013:Post/1285829 2018-05-21T06:02:29Z 2018-05-21T06:07:24Z Research Coin v3


At this point, the thoughts contained in this post are quite old. However, I wanted to publish it as I've been tinkering with a new formulation of a protocol. It comes as an extension of thoughts by Nicola @ Protocol Labs. 

Research is really expensive, a public good, and has nastier power-law returns than startups. The graph above shows revenues generated by patents, the step that comes after publicly funded research. It took roughly 10k patents produced at Northwestern, at a yearly cost of at least $675 million, to produce one patent with licensing revenue of $1B/yr. That's a cost of about $67,000 per patent to get this holy grail.

Bell Labs spent over $10B in inflation-adjusted dollars on research and brought together the most incredible minds in an incredibly productive environment. Some of the end results include the transistor, which we can all thank as the earliest baby step toward you reading this article.

Today, academics and funders complain about the misalignment of incentives for funding science. That's a discussion for another day. 

What we'll talk about today is a potential mechanism to fund basic research at scale. We want to do this to produce the research needed to generate these valuable patents, as well as to reward the scientists, the individuals who actually generate the ideas.

The core idea is recursive payments and ownership, instead of just betting on getting accepted into a conference or the like.

Units of the research coin system:
  • Paper 
    • Papers have owners
    • Papers have citations to other papers
    • Each paper has a token. This token is distributed to the owners of the paper and to the papers it cites, via the mechanism described below.
  • Owners are types of people. An owner can be an individual contributor, an organization like MIT, or something else. (A minimal data-model sketch of these units follows this list.)
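Here is that data model sketched as Python dataclasses. The class and field names are my own assumptions about how the pieces fit together, not a spec.

# Minimal data-model sketch of the units above (field names are illustrative assumptions).

from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class Owner:
    """An individual contributor, a lab, MIT, a DAO -- anything with an address."""
    address: str

@dataclass
class Paper:
    paper_id: str
    citations: List[str]                       # ids of the papers this paper cites
    # Each paper has its own token; balances map owner address -> share of that token.
    token_balances: Dict[str, float] = field(default_factory=dict)

@dataclass
class ResearchCoinState:
    papers: Dict[str, Paper] = field(default_factory=dict)
    research_coin: Dict[str, float] = field(default_factory=dict)  # address -> balance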

Why staking and markets instead of social-style peer review?
  • Peer review and prestigious journals are proxies for the long-term value of a paper. If we develop a market around each individual paper’s value, that might be a good way to get rid of the social gatekeepers of conferences and journals that incentivize “flash in the pan" ideas.
  • There are already too many papers out there. Staking might open the door for algorithmic reviewers alongside human peer reviewers. 
  • It could induce more reproducibility studies, which are valuable but don’t get published in the flashy magazines, if people can figure out a way to capture value from them (by shorting a weak paper?, or by purchasing a share of a strong one).

Why papers and owners instead of organizations?
  • Owners can be anyone who holds an interest: perhaps an author, an organization, or something else (a DAO).
  • Tokens could flow directly to researchers, who may do a better job of funding and finding talent than bureaucratic organizations.
  • Should owners have a token that people can purchase?
    • A market for organizations? This may be out of scope here, since organizations could just be wallets, and people could own a share of these if they wanted to.

Why a token?
  • Tokens help align value. They establish a clear, unambiguous signal for a paper's value, while citations are social and a little bit messy (vanity citations, you-scratch-my-back-I-scratch-yours kind of thing).
  • Seems like this is a utility-token kind of thing. 
  • Maybe you could just use ETH to place bets and do payments, but it seems like you should definitely have some research coin for governance and staking.

Would people invest in research coin?
  • Researchers need to purchase or use research coin to review a paper.
  • People would put money into research coin because it should produce better science and more socially useful results than what is performed right now.
  • You purchase research coin because you think it produces a better body of science.

What's the mechanism?

  1. Research coin distribution event (an ICO? or an airdrop to researchers and other stakeholders).
  2. Each paper, when it hits a preprint server, starts a game. Perhaps authors of papers have to stake money as well?
  3. Owners stake research coin so they can peer review the paper.
  4. We gather a pool of money staked on the paper. The stakers decide on the validity of the paper, and on how to initially distribute the paper’s individual token between its owners and the papers it cites. This is some type of Schelling point game.
  5. Reviewers near the Schelling point are rewarded with some new research coin (in some proportion to how much was staked); the stake of bad reporters is slashed.
  6. Once the paper’s Schelling point is set, locked research token is distributed recursively to holders of the paper’s token, pro rata to the Schelling point. The intuition is that peer reviewers want to review important papers and therefore will stake tokens to do so. More important papers accrue more staked token, and more token flows recursively to the holders of the paper’s token. I guess this is technically securitizing basic research IP, lol. (A toy sketch of this recursive payout follows the list.)
  7. Markets develop for each individual paper’s token. These papers may, later on, yield great research results and therefore generate recursive payments of research token. As more papers get published, money flows recursively to the parent papers and their owners. The price of a paper’s token, denominated in research coin, may serve as a signal of its long-term value.
  • Recursive ownership is important because it incentivizes research with the greatest NPV in terms of research coin.
  • Researchers who publish should get steady payouts as more papers cite them, so they can continue to fund more research.
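Here is a toy sketch of the recursive payout in step 6, assuming the Schelling game has already settled how much of a paper's value goes to its own token holders versus the papers it cites. The 70/30 split, the even division across citations, and every name below are my own illustrative assumptions.

# Toy recursive payout for step 6 (split and parameter names are illustrative assumptions).
# A paper receives research coin; a fixed share goes to its own token holders and
# the remainder flows upstream to the papers it cites, recursively.

from typing import Dict, List

def distribute(paper_id: str,
               amount: float,
               citations: Dict[str, List[str]],      # paper -> papers it cites
               holders: Dict[str, Dict[str, float]], # paper -> {address: token share}
               payouts: Dict[str, float],
               author_share: float = 0.7,
               min_amount: float = 1e-6) -> None:
    if amount < min_amount:
        return
    cited = citations.get(paper_id, [])
    to_authors = amount if not cited else amount * author_share
    # Pay this paper's token holders pro rata.
    for address, share in holders.get(paper_id, {}).items():
        payouts[address] = payouts.get(address, 0.0) + to_authors * share
    # Recurse on citations with the remainder, split evenly here for simplicity.
    remainder = amount - to_authors
    for cited_id in cited:
        distribute(cited_id, remainder / len(cited), citations, holders, payouts)

if __name__ == "__main__":
    citations = {"paper_B": ["paper_A"], "paper_A": []}
    holders = {"paper_A": {"alice": 1.0}, "paper_B": {"bob": 0.8, "alice": 0.2}}
    payouts: Dict[str, float] = {}
    distribute("paper_B", 100.0, citations, holders, payouts)
    print(payouts)  # bob gets 70*0.8, alice gets 70*0.2 plus the 30 that flows up to paper_A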

Additional thoughts
  1. Bounties for research could also function within this system, i.e. “I am putting X research coin up for grabs if you can solve this problem and these people can verify its validity.”
  2. It does seem a bit complicated.

Dillon Chen
tag:dillchen.posthaven.com,2013:Post/1240531 2018-01-31T00:58:33Z 2018-01-31T01:14:18Z Economic Returns of Casper
I recently set up a small mining rig; my housemates have audio-visual evidence of this. My first thought was to mine Ethereum. However, the big thing looming over this particular foray into hardware is the switch to Casper, Ethereum’s new Proof-of-Stake protocol. When that switch happens, I want to start staking ETH and participating in that consensus protocol as well.

On ethresear.ch, you can find active discussion spurred on by Vitalik and Jon Choi on the potential economic outcomes of the switch and how they might drive monetary policy.

We can look at the current rate of return on PoW mining right now. While the profile of stakers vs. miners may be completely different, I wonder if the total deposit level will be adversely affected. We may have fewer deposits than has been posited, somewhere between 0.1% and 0.5% of network value as the total deposit (TD) ratio, or $60-300M worth of deposits. I arrive at this conclusion by comparing the current rate of return that miners get to what is being discussed on the site. At the current return, we’d only have about $300M in staked deposits, which feels quite low to protect a $70B chain from subversion.

The market-driven rate of return for consensus-protocol rewards in relatively stable protocols (BTC, BCC, ETH) has remained high in comparison to the ranges shown in the Google Spreadsheet that Jon shared. The range between 20% (equities) and multiples (cryptos/startups) is very large. The current PoW yield is closer to a startup's risk-reward profile than a public market equity's, with an estimated yield of around 150%--back-of-envelope math in the bullets below, plus a short script reproducing the arithmetic after them.

Given the current hash rate, I factor in fixed costs, variable costs (electricity), and non-recurring engineering costs such as physical space to find the current yield, outside of price appreciation. Right now, given the price of ETH, it’s pretty damn profitable to mine; I arrive at an estimated yield of roughly 150% per year. The total cost of the network, including the aforementioned costs, is $3-5 billion, for a security-to-network-value ratio of around 5%.

This checks out as well, given that the payback period on a single NVIDIA 1070 is around 7.5 months. 

It seems like we might see a much smaller TD ratio given the market rate of return on mining now. Given the stated target inflation rate of 0.5%, I’m afraid we might see a much lower participation rate than the modeled yield suggests. PoS with the 4-month lockup seemingly offers the same risk/reward and liquidity profile as PoW; PoW is potentially even more liquid, given that I can start mining some other token if the price of this one drops. Of course, the biggest driver of this is perhaps that returns from HODLing have been so extraordinary; after all, the price of ETH has gone up basically 100x YTD. When the returns on crypto assets start to stabilize, PoS returning a stable 15-20%, not including appreciation of the underlying asset, seems pretty good [1]. 

  • Looking at the current hashrate gives ~150,000 GH/s, and an NVIDIA 1070 GTX gives ~30 MH/s, so there are approximately 5,000,000 GPUs working to secure ETH. These GPUs each cost $500. If we estimate that overhead expenses are 1.2x the per-GPU cost, we arrive at an all-in fixed and NRE cost of about $3 billion.
  • If each GPU draws ~150W, electricity usage is roughly 7,300,000,000 to 10,550,000,000 kWh/y, for an all-in electricity cost of about $365,000,000 per year (5,000,000 / 6 * 0.05 * 24 * 365).
  • Taking that into account, we have $3-5.19 billion / $30 billion network, or a TD ratio of 11-16%.
    • We’re paying out $3,858,570,000 in USD, or 12,861,900 tokens per year (13.83%), at $300 per token.
    • Yield of 74-100%.
  • At a $700/ETH price, we have $5.19 billion / $70 billion network, or a TD ratio of ~4%. This is with the current inflation rate of 15%. Miners currently zero out at 7.5 months, leaving 4.5 months of profit. This gives a yield of 55-165%, hmmm…
    • We’re paying out $9,000,000,000 in USD, or 12,861,900 tokens per year (13.83%), at $700 per token.
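A short script that reproduces the shape of this arithmetic using the same assumptions as the bullets (5M GPUs at ~30 MH/s, $500 each with 1.2x overhead, ~150W, $0.05/kWh, ~12.86M ETH issued per year). The resulting percentages won't line up exactly with the figures quoted above, since those mix a couple of cost bases and amortization assumptions, but the calculation itself is the same.

# Back-of-envelope ETH PoW economics using the assumptions in the bullets above.

network_hashrate_ghs = 150_000          # ~150,000 GH/s
gpu_hashrate_mhs = 30                   # NVIDIA 1070: ~30 MH/s
gpu_cost_usd = 500
overhead_multiplier = 1.2               # space, PSUs, boards, etc.
gpu_power_kw = 0.150                    # ~150 W per GPU
electricity_usd_per_kwh = 0.05
tokens_issued_per_year = 12_861_900

gpus = network_hashrate_ghs * 1_000 / gpu_hashrate_mhs       # ~5,000,000 GPUs
capex = gpus * gpu_cost_usd * overhead_multiplier            # ~$3B fixed + NRE
electricity_per_year = gpus * gpu_power_kw * 24 * 365 * electricity_usd_per_kwh

for eth_price in (300, 700):
    issuance_usd = tokens_issued_per_year * eth_price
    yearly_yield = (issuance_usd - electricity_per_year) / capex
    print(f"ETH at ${eth_price}: issuance ${issuance_usd / 1e9:.2f}B, "
          f"capex ${capex / 1e9:.2f}B, electricity ${electricity_per_year / 1e6:.0f}M/yr, "
          f"yield on capex after electricity ~{yearly_yield:.0%}")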

[1] https://ethresear.ch/t/casper-validator-yield-as-a-function-of-td-and-issuance/222



Dillon Chen
tag:dillchen.posthaven.com,2013:Post/1236879 2018-01-24T01:00:02Z 2018-01-30T06:39:29Z Crypto's Ladder of Abstraction
Like all good blog posts, this one starts with a tweet. In this case, I can point to Nicola for spurring this one. Niraj and I previously collaborated on a post called “Merging Chains”. You can think of this in the same spirit as that post.


Portability has been a great side-effect of abstractions for computation and higher-level languages, and it will have the same effect in the decentralized world as well. In the centralized world, we have Dropbox, Google Drive, and Evernote, which let us take our information wherever we want it; before them, the model was a thumb drive or a clunky data transfer. The internet helped pave the way for user-side abstraction: when we wanted to upgrade devices, we didn’t have to worry about our data. On the dev side, we’ve seen the evolution of serverless. Preceding that were IaaS plays, namely AWS, and preceding that you had to rent hardware and co-locate it. 

Right now, a lot of effort is spent building on a base-layer, Turing(ish)-complete, stack-based machine like Ethereum. While Ethereum remains the market leader right now, things might change. A 0day exploit might appear, someone very influential within the organization might die, or the switch to PoS might actually prove to have a bad security model. Those don’t necessarily reflect what I believe, but rather are stated to show some “existential”-type risks that might compromise a base-layer protocol.

In theory, a mature dapp built on top of Ethereum shouldn’t derive much of its value from the security model of Ethereum, and should be able to move its contract state to another base-layer protocol. Another way to look at it is again through the lens of history and the greater abstraction previously mentioned. A user doesn’t really care whether Dropbox uses its own servers or is hosted on AWS. Of course, they care about their information getting lost or stolen, but that’s up to the developers to worry about.

As mentioned previously, developers today don’t have to deal with renting servers, and developers on Ethereum don’t have to write EVM bytecode either. We’ve already seen people build on different platforms: Kin moved to Stellar rather than building on Ethereum, at least initially. I have a gut feeling that switching costs may be lower than people think, especially since new base-layer protocols are already taking the tack of supporting the EVM, like RSKSmart. Also, the Ethereum state trie is already publicly available, which lets people do airdrops and the like, as with EtherMint.

And of course, Ethereum abstracted away the messy world of bootstrapping your own blockchain secured by miners. However, as we build upon this world of abstractions, it’s easy to forget that it rests on real components: while you can write in a high-level language, your code is still executed by self-interested miners, and that leads to interesting side effects and security concerns.

Ryan Shea and co spent time thinking about migrating state for Onename, so this isn’t a thought that is completely out of the blue. Of course, we’re now seeing protocols such as Cosmos, Polkadot, and aelf being presented as partial scaling solutions. Hopefully, they'll allow protocols now built only on Ethereum to work on other base-layer world computers with ease.

In the formally-verified future, dapps and protocols will compile down to multiple VMs, and users and developers might not have to worry about a breakdown in the consensus mechanism of any one base-layer protocol. A “meta”-token that wraps both the native ERC20 and whatever the token specification is for another base-layer protocol will exist. Maybe these token prices will be pegged to each other, or value will accrue in proportion to the amount of state that each chain actually keeps. In this way, the different base-layer protocols may just be different shards on which protocols interact. Already, some tokens are looking at building on both Ethereum and NEO.
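As a rough illustration of the pegged meta-token idea, here's a toy ledger in Python. The chain names, the 1:1 peg, and the burn-and-mint migration are assumptions for the sketch; a real implementation would need bridges, proofs, or validators to enforce them.

# Toy "meta-token" ledger (purely illustrative; chain names and API are assumptions).
# The same logical token lives as wrapped balances on several base chains; moving
# between chains burns on one side and mints on the other, keeping total supply fixed.

class MetaToken:
    def __init__(self, total_supply: float, chains=("ethereum", "neo")):
        self.supply_on = {chain: 0.0 for chain in chains}
        self.supply_on[chains[0]] = total_supply  # start fully on the first chain

    def migrate(self, amount: float, src: str, dst: str) -> None:
        """Burn `amount` on src and mint it on dst; total supply is unchanged."""
        if self.supply_on[src] < amount:
            raise ValueError("insufficient supply on source chain")
        self.supply_on[src] -= amount
        self.supply_on[dst] += amount

    def total(self) -> float:
        return sum(self.supply_on.values())

token = MetaToken(1_000_000)
token.migrate(250_000, "ethereum", "neo")
print(token.supply_on, token.total())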

If this vision of dapps on multiple chains does play out, competition between base-layer protocols based solely on the dapps they host may not be a long-term competitive advantage. Again, that hypothesis is premised on the belief that the switching costs of state are low, and it does look like that is happening. If a protocol advertises a competitive advantage just because it’s building on a certain VM, that isn’t going to be a long-term advantage.

I don’t really offer up much in the way of analysis, just the observation that we’re in the early days of crypto. There are so many rungs on the ladder of abstraction yet to be formalized and built. It's not immediately clear how scaling will play out, or where the points of friction, and therefore economic value, will be long term. You might say that the tokens with the largest network effect will win out, i.e. Ethereum. Yet the network effect argument is self-referential: the more people use it, the better Ethereum gets, but the more people that leave an ecosystem, the more unstable it gets. With flows between addresses on-chain and economic value exchanged cross-chain all available in real time on decentralized exchanges and blockchains, we could see, in real time, the shift in network effects from Ethereum to a hypothetical competitor. We won't have to wait for Facebook to release its latest earnings report to show that it churned some X% of users. Please talk to me if you think I'm right or wrong :)
Dillon Chen
tag:dillchen.posthaven.com,2013:Post/1203909 2017-11-08T00:37:01Z 2021-04-11T20:36:19Z So What's in a PhD
I remember watching Dragon Ball Z, where Gohan’s mom, Chi-Chi, always wanted him to get a PhD. This really hammered home the importance of the credential, the PhD, to be recognized as an expert. Since that time, however, I’ve become somewhat of an autodidact who learns just for the sake of it. I recently tweeted this:


The response was surprising. However, I stand by the statement. I first stumbled across this quote while reading "The Mathematical Experience.” The "80 book benchmark" shattered the final remnants of the childhood illusion that you need a PhD, some mystical level of achievement, to become an expert. In its place now stands a new belief: becoming expert-level is not that hard. It’s a concrete milestone that anyone sufficiently motivated can achieve.

I really like this 80-book expert benchmark because it has all the classic signs of a good goal: it’s measurable, achievable, but still decently ambitious, especially if you love books. Becoming an “expert” is not that hard, especially if you don’t need the credentials. And thankfully, if you work in startups or are creating something, credentials are not that important. If you really do need credentials, you can always hire someone with the right three-letter acronym.

80 books, while seemingly daunting, is not that bad. The average US worker spends almost an hour commuting to and from work. If she decided to use that time to read instead of ‘gramming or texting, she’d get through a decent number of books per year. For a book printed in a normal-sized font, a reader of this blog could probably read a page per minute, including the appropriate in-text highlights for subject matter retention. That means you, dear reader, could probably finish an average-sized book of ~360 pages per week, or ~50 books per year. You could get a PhD in 2 years with time to spare! [1]

The eighty-book mark is also great because it illustrates how little knowledge an individual needs to become an expert. Within startups specifically, the low barrier to becoming an expert makes investment decisions in “inexperienced” or “young” founders less risky than they appear. I’ve already written on how young founders often found the biggest, baddest, and best companies. If you believe the thesis of this piece, then being young is less of a disadvantage because it’s so easy to get up to speed in an industry.

Expert-level specialization is very real and necessary. Even a small-town library will usually have at least a few thousand books waiting to be checked out. If we only know 80 books’ worth of knowledge, it’s hard to imagine how you’d be able to build a multi-faceted business, and with knowledge expanding at an exponential rate, it seems even more daunting. This is one of the reasons why being an expert or getting things done in the world still requires you to collaborate with others and/or use tools to manage knowledge.

Of course, the 80-book goal doesn't cover all the nuances of being an expert. On Twitter, others brought up several counterpoints. First, books aren’t always the best source of knowledge. I think this is certainly true. To the original goal, I would then add the caveat that you need to read 16,000 pages, or 80 books’ worth of material at 200 pages per book. This is especially true in fast-growing fields such as blockchain or AI, where preprints, blogs, and Twitter move faster than books. Where you choose to get the 16k pages certainly makes a difference in what you learn. The best practitioners are often the ones who aren’t teaching the subject; their knowledge is either much more implicit, or codified in a much more free-flowing form factor such as a blog post. Take, for example, some of Vitalik’s writings on cryptocurrencies. If you’re getting into crypto, his posts will serve you much better than any book proclaiming that blockchains are the second coming of the internet.

Another common retort to the “80 books" claim was that being an expert is mostly about creation. However, I’d say people still need some base level of knowledge to be productive in a field, and as we’ve established, 80 book-length pieces of information, or 16,000 pages, or two years of learning seems about right to me. You’re probably familiar with the “Whartonite Seeks Code Monkey" or “I can handle the business side" meme pages. In short, they both poke fun at B-school students who don’t really understand the mechanics of product or startups. When I first read TechCrunch and watched The Social Network, I 100% asked a technical friend of mine the same questions. I didn’t have the requisite mental models of what a “startup” was to know why this was a bit silly of a request. Yet after reading blogs, working on products, and talking to folks to pick up the implicit domain knowledge, I now do. More generally, understanding a domain lets you know what's at the "adjacent possible", the stuff that's hard enough that no one's done it yet but not impossible. In physics, this would be the difference between working on gravitational waves and working on time travel.

I look forward to getting my PhDs in bio, brains, and blockchains soon :)

---
[1] The speed at which a person reads will definitely depend on the subject matter. While reading Molecular Biology of the Cell, I read at approximately 15 pages per hour while taking detailed notes. At 1000+ pages, MBOC would take me ~70 hours to read cover to cover. A normal college-level bio class probably covers half the material in the book, so I could do a semester in ~40 hours, or a normal work week. Of course, the caveat is that I’d have to read 8 hours per day. I don’t, but a sufficiently motivated individual who finds the subject matter interesting could. Warren does it.
Dillon Chen
tag:dillchen.posthaven.com,2013:Post/1202584 2017-11-01T20:44:02Z 2017-11-01T21:14:16Z Some Thoughts on "Confessions of a Sociopath"
While browsing bookstores in NYC, I stumbled across a striking cover. A porcelain mask. Female. Red lipstick, with an attached popsicle-stick handle. My eyes wandered down to the title in the bottom left-hand corner--“Confessions of a Sociopath”. Intrigued yet hesitant, as I don't normally read pop psych, I picked the book up. I put it down twenty pages later. I didn’t purchase it; it was a little too spooky for me. When you're left with a new lens with which to view your friends, colleagues, and possibly yourself, you’d feel the same way.

I ended up purchasing it at another bookstore later in the same day.

M. E. Thomas, a pseudonym, writes in an extremely readable, transparent style. The compact volume of three hundred or so pages reads a bit like a diary, which is exactly what a sociopath would want you to want: we want to feel like we know the other person. Yet, true to her sociopathic nature, the prose is lightweight, easy to reach for, and a bit detached. Just what we’d want in a fling: to be drawn in, to imprint our own desires onto, and to be left wanting to know more. An early moment we experience is mom and dad driving away and forgetting us at the park. A moment that “normiopaths” or “empaths" would regard with fear, tears, or some other visceral reaction, M. E. takes as a chance to prove that she can live without them. M. E. reveals nothing, and with this style she draws us into her inner world. 

We follow M. E. as she navigates growing up in a somewhat dysfunctional household and matures into a beautiful, intriguing, and cold young woman. Some of her experiences as a child, I think readers may be able to relate to, especially if they were an outsider or immigrant to a new community. When you come in as an outsider, there are cultural norms, language cues, body language differences, and inside jokes, picked up innately by some and intentionally learned by the outsiders. The difference here is that, for M. E., the language to be learned is that of emotion, something we might take for granted. The only strong desire she expresses is that of power: control over her environment and all the people around her.

We discover how she manipulates the people around her, often without them knowing; we learn, as she does, that emotions play no part in her mental world, and that rules that don’t advantage her can be broken. We’re often reminded of rebels, criminals, and vampires, the darker archetypes of our mythology, characters we are enthralled with, at least in that they have freedom from internal and societal retribution. By continually drawing on examples from literature, particularly from Steinbeck, she reminds us of favorite characters and perhaps of people in our own lives who fit this sociopathic mold. Not only does M. E. draw from these sketches, she also draws from brain imaging and clinical research, as well as clinical definitions from psychiatry. This gives the extremely transparent, personal narrative a touch of scientific authority without being too drawn out.

The worlds of work and love figure heavily in this book. Sociopaths, as we learn, turn out to be tailor-made for corporate capitalism. Money, that impartial thing so much of daily life is centered around, is a sociopathic object; it can be transformed into whatever desire we may hold. Within jobs that require stress tolerance, acting, or even normal office politics, sociopaths are able to lie and win their way into higher and higher positions. They're better able to deal with the stress of firing people or launching a new product than we are. However, we find through personal anecdotes that a cutthroat character isn’t always as good as it seems; the same impulsive behavior becomes less reliable at creating the long-term relationships needed for management positions. I often thought of Steve Jobs as a possible archetypal sociopathic CEO, driven toward a great product through a path of scattered emotional breakdowns.

We later turn to the subject of love, and as noted before, it’s more than tough to maintain a long-term relationship when the default position is to be whatever your lover wants you to be. But as we know, vulnerability, that is, being your true self, or at least acting and speaking as if you don’t have anything to hide, is the key to long-term relationships.

When we look around the office, or our college campus, or even into our loved ones' heads, we often wonder what is going on behind their eyes. In a certain way, while reading, I was reminded of the Turing Test: how can you tell whether this thing producing some output is intelligent and/or conscious? To extend the metaphor, this is a Turing Test for emotions: "Do I actually know what this person is feeling at this moment?” It’s a bit frustrating. There will always be that lack of understanding we face when dealing with people, just because we haven’t lived the exact same experiences as them. 

How do we know our lover’s smile is genuine? What if, like a chameleon, our lover is producing this contortion of facial muscles to provoke the response we so desire? The ends they want may not be just to please us; they may be planning, plotting three steps ahead, using the goodwill generated from that smile to cajole us into changing the channel to whatever they wanted.

At the end of the book, we’re left with M. E. as she goes about her life without a care in the world, without attachment, yet desiring a real connection and wanting kids, and we’re struck by the normalcy of it all. These are desires that all of us feel; our mental worlds have just happened to mold our perceptions into slightly different arrangements. Our biological drives, along with our upbringings, can really make a difference in our lives.

If you read "Confessions of a Sociopath", you will wear the sociopath's mask. Some of you may find that it fits your face perfectly; you may gain answers to some pesky questions you’ve always wondered about yourself. If not, you may be disgusted and put off, but you will certainly wonder more about the man on the train with a certain glint in his eye. What is he thinking? How does he feel, if anything?

Dillon Chen
tag:dillchen.posthaven.com,2013:Post/1200475 2017-10-24T23:00:06Z 2018-11-17T03:46:48Z Biocomputers
A mostly speculative post on the far-ish future of biology.

This essay is a spiritual successor to my previous post on the subject. If you’re an investor, feel free to invest with that essay’s thesis in mind :) . I’d like to take a few steps forward into the future and try to reason backwards to where we are now. I began the other essay with a comparison to the mainframe era, and I’d still like to draw on the computing metaphor.

Most people identify Intel and the microprocessor as a key innovation in the whole computing revolution. The same could be said of the Apple II, which finally incorporated the microprocessor into a consumer-ready, integrated product. I won’t argue for or against either as the marker of a new age. Either way, those technologies were unequivocally tied together; they bookended the period where the microprocessor led the way to general-purpose computing for everyone.

The integrated circuit was the culmination of billions of dollars in R&D, and today the heir to that technology is the iPhone 8, which holds some $150 trillion worth of transistors at 1957 prices. These devices let you do essentially anything and are the cornerstones of global communications and global money. A person could live their life with just a phone.

I wonder what set of innovations might allow for the equivalent exponential jump in biology, the microprocessor for biology. What’s the equivalent of a general purpose computing device in biology, and why would we even want one? 

First, let’s look at the definition of the microprocessor according to Wikipedia.

"The microprocessor is a multipurpose, clock driven, register based, digital-integrated circuit which accepts binary data as input, processes it according to instructions stored in its memory, and provides results as output.”

If we swap out binary data for DNA, that sounds a lot like what a nucleus does. The speed and accuracy with which we can create new strands of DNA is limited right now. Biology is, of course, general purpose: the same DNA that codes for humans can be used to code for algae. However, most DNA is assembled for a specific purpose. The software, the ACTGs of DNA, is still way too expensive to sequence. Additionally, de novo gene synthesis and assembly, or making long DNA strands from scratch, is doubly plus expensive. While we herald a $1000 human genome sequence, and soon a $100 one, the cost really needs to be close to zero. And while a single base pair costs $0.02 to synthesize, that also needs to be close to zero. 

Why do I think $0.02 is way too high? Think about it this way: if every line of code cost $0.02, we would not have operating systems or any of the wonderful things we depend on today. To get to truly ubiquitous DNA manipulation, the cost has to be ~$0.00000, like manipulating electrons in a personal computer.
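To put the $0.02 figure in perspective, here is the multiplication written out (genome sizes are approximate):

# Rough arithmetic behind "$0.02 per base is way too high" (genome sizes are approximate).

cost_per_base = 0.02           # USD per synthesized base pair
human_genome_bp = 3.2e9        # ~3.2 billion base pairs
e_coli_genome_bp = 4.6e6       # ~4.6 million base pairs

print(f"E. coli-sized construct:      ${cost_per_base * e_coli_genome_bp:,.0f}")
print(f"Human-genome-sized construct: ${cost_per_base * human_genome_bp:,.0f}")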

In short, a biological microprocessor, a bioprocessor for short, would be able to manipulate DNA and spit out the results, the biological and chemical components of whatever we wanted, at near-zero cost. An integrated biocomputer would take the inputs (single cells, small molecules, blood drawn from individuals, other enzymes) and return new cells with the right genes inserted. Attached to the main bioprocessor would be other peripherals such as microscopes, perturbation devices, electroporation devices, incubators, bioprinters, and fluid and solid handling devices (think needles and other things), as well as connections to traditional chips.

Fundamentally, having a digital bioprocessor or some personal-computer equivalent could lower the cost of creation by several orders of magnitude. The tabletop sets we have today for home biology are the equivalent of HAM radio sets, so it will be some time before we have anything really cool. Still, biology holds the same property of being an information science. Like the pre-personal-computer and pre-internet era, we currently have to go to separate sources to gather all of our biological material: we travel to the grocery store, we go to the mall to buy creams synthesized by snails, we get surgery and pay $ to look different, we go to the pet store to get pets; even the clothes on our backs are made from organic materials. If we could download creams, seeds for foods to be grown, and drug treatments, we could enable biological creativity like we have in bits.

One use case that bioprocessors could dramatically influence is human drug and medical treatment. The Martin Shkreli and EpiPen snafus could be avoided by at-home production of molecules and treatments. But if the marginal cost of treatments is zero, then how are drug development costs to be amortized? Creating a blockbuster drug today costs billions, so what happens when individuals are able to “download” medicine for free? Of course, this is a moral dilemma: orphan-disease gene therapy treatments cost consumers $500,000 for one treatment, which seems a bit outrageous. 

Business Models for Biology

Bioprocessors should hopefully have two first-order effects on biology: decreasing the cost of production and the cost of distribution. We just have to look to software; as we’ve seen with the internet, a radical shift in the costs of distribution has reshaped industries and will continue to do so. 10-1000x cost reductions lead to startups disrupting industries. With the internet, everything either became free, had a SaaS/API model attached, or birthed a marketplace. Each download or use will cost some amount, like hitting an API endpoint.
  • Music -> piracy (zero-cost distribution) + lower production cost = free initially, but now a SaaS model; litigious for sure.
  • Movies -> high production costs, lower discovery/distribution cost = SaaS model (Netflix).
  • Banking -> high production/integration cost = now there's an API for it; we have Stripe.
  • Housing -> high production cost, high discovery cost = marketplace model (Airbnb).
The same will happen with biology. The effect on food will be different from that on pharma, and that’s related to the market dynamics of production, distribution, and reputation. All these elements add to transaction costs, and as we know, transaction costs govern where fat businesses are made: sit on top of a fat pipe of transaction costs and win money for a long time. A worry people have is drug piracy. If the cost of downloading a drug effectively drops to zero, then what happens to the dollars that need to go into research?

There are a few effects a bioprocessor and its associated peripheral devices could have on drug development. The cost of research should be way lower, allowing more drugs to come onto the market; however, determining efficacy will still be hard, so brands or marketplaces should establish themselves.

However, free in biology isn’t necessarily bad. People don’t always need to be motivated by (direct) monetary ends to contribute: the Debian ecosystem has had ~$20 billion of work put into free software. And this isn’t just random stuff; it runs on almost every internet-connected server, and we depend on it for critical infrastructure. We could potentially have freely designed, pest-resistant seeds that farmers could use instead of ones controlled by the huge pharma companies.

We might have a SaaS-like business model for individuals to purchase treatments (Illumina and the gene therapy market??? -> have the right idea). However, we’ll have to deal with data security: medical records are worth 20x your credit card information on the black market, and there is no way I would want my health information hacked. A more fun SaaS business might be a custom-designed hair product and colorizer. First, input a strand of your hair, enter the desired hairstyle and texture, and out comes a specially designed set of creams that actually changes biological hair growth from the follicles. If we actually change the follicles, then we can change the color and texture of our hair at will for longer, cheaper, and more safely than we do now.

If we go to space, we’ll certainly need and want different biological tools. Space radiation can kill, just as scurvy killed sailors, and it could potentially be mitigated by four SNPs that might well be free to download. A digital biocomputer would be a necessary tool: we’re not going to have a lot of room on those spaceships, and we’re going to need to bring a lot of things. The best way of compressing things is to store them as information.

All of these are possible arrangements for how the bioprocessor changes the production and distribution of organic materials. But sadly, we’re still a ways away.

Today: Complexity

Computer scientists severely underestimate the complexity of even single cells. These things are really, really complex to model and build, especially if you want atomic-scale precision. Atomic-scale precision is often what you’ll need; after all, polymerase is atomically precise. It maneuvers individual atoms into place, and we can thank evolution for that: we only get a handful of mutations per few billion base pairs. To do that level of simulation for a single cell, we need to assume Moore’s Law continues 50 years into the future (so we’ll basically need quantum computers to continue that trend). For a whole-brain simulation, we’ll need 100 years. Another example of complexity is protein structure.
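To get a feel for what "Moore's Law for another 50 or 100 years" implies, here is a minimal sketch of the arithmetic. The 18-month doubling period is an assumed convention, not a measurement, and the horizons simply echo the single-cell and whole-brain figures above.

```python
# Illustrative only: how much more compute an assumed Moore's Law cadence buys.
# The doubling period and the horizons are assumptions, not data.

def compute_multiplier(years: float, doubling_period_years: float = 1.5) -> float:
    """Factor by which compute grows if capacity doubles every `doubling_period_years`."""
    return 2 ** (years / doubling_period_years)

for horizon in (50, 100):  # single-cell vs. whole-brain horizons mentioned above
    print(f"{horizon} years -> ~{compute_multiplier(horizon):.2e}x more compute")
```

Under those assumptions, 50 years is roughly ten billion times today's compute, which is the scale of gap we are talking about for faithful cell-level simulation.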

We’ll either need to reduce the modeling accuracy of our systems (as we’ve done with deep learning) or use biological techniques in addition to computational models. We can use bioprocessors as a platform for studying cells, directing their evolution, and creating anything we want. On our way to a glorious biologically infused future, we have many roadblocks to creating the components for a bioprocessor and/or personal biocomputer.

A future post will speculate in detail on 1) what a bioprocessor actually looks like 2) who’s working on this stuff now and 3) what else is holding us back.
]]>
Dillon Chen
tag:dillchen.posthaven.com,2013:Post/1199053 2017-10-18T04:09:56Z 2018-01-31T08:54:30Z Merging Chains
Written by Niraj and Dillon

If you posit that bitcoin has a network effect, then the more people who use the currency to transact, the more valuable the coin becomes; the more valuable the coin becomes, the more users you get, and the stronger the network effect. Additionally, if a longer chain history means better security and more miners mean better security, is there a way, in the long run, to increase the network effect by merging chains?

Right now, we've only got people doing forks. Forks are important: they allow for experimentation on rule sets. However, they may reduce the overall network effect of any single token. Forks are also really good because they align incentives with the people who have already done work on the master chain. In the example of ETH and ETC, we argue that it's a feature that the Ethereum Foundation automatically held both ETH and ETC without asking anyone's permission: they could gain from the economic value created by another development team. The new development team wins because they get an initial customer set, the set of pub-priv key pairs that already holds ETH. This is a subtle shift in incentives; we'll write more about this later...

While we're not advocating for a maximalist approach (the idea that there should only ever be one token), it seems like there needs to be a process to merge chains just as there is a process for forking them. There is an argument to be made that "Core" teams, or the foundations bearing the base token's name, centralize development resources: in BTC and ETH respectively, only 5 and 2 developers make up the majority of commits. Forking seems to have become a way for talented devs to work on protocols. Just look at LTC and @satoshilite.

Additionally, we see that experimentation has been a net positive for society in other areas. Allowing for experimentation and merging isn't limited to blockchains. Just to name a few:
  • Policy experimentation within a federal system of government. I.e. adoption of a precursor to the Affordable Care Act before it became national law.
  • Startups as new entrants that can be acquired or grow to be large companies.
  • Spin offs from large corporations. Standard Oil became several smaller companies and Rockefeller was richer for it.
  • Mitochondria being swallowed to become the powerhouse of the cell.

In blockchain terms, you could conceive of chain merging as extended uncle resolution. In the GHOST protocol, an uncle's hash power is added to the winning block's score, and the uncle's miner is still incentivized: they get some proportion of the block reward. Likewise, people who contributed to the "losing" token are still incentivized; when you merge chains, you're still rewarding the smaller chain for its absorption into the larger one. While protocols can directly implement the necessary hard/soft forks to include a fork's rule-set changes, they won't get the now-differentiated user base that comes with it.
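To make the uncle analogy concrete, here is a toy Python sketch of an uncle-style reward split. The (8 - distance)/8 uncle fraction and the 1/32 inclusion bonus mirror Ethereum's published schedule, but treat the numbers here as illustrative assumptions rather than a spec; the base reward is made up.

```python
# Toy sketch of GHOST-style uncle rewards (parameters mirror Ethereum's schedule,
# but treat them as illustrative assumptions, not a spec).

BLOCK_REWARD = 3.0  # assumed base reward in the chain's native token

def uncle_reward(uncle_height: int, including_height: int, base: float = BLOCK_REWARD) -> float:
    """Reward paid to the miner of a stale ('uncle') block when it gets referenced."""
    distance = including_height - uncle_height
    return max(0.0, (8 - distance) / 8) * base

def inclusion_bonus(num_uncles: int, base: float = BLOCK_REWARD) -> float:
    """Extra reward to the canonical block's miner for referencing uncles."""
    return num_uncles * base / 32

# An uncle mined one block behind the block that references it:
print(uncle_reward(100, 101))   # 2.625 -> the "losing" miner still earns most of a reward
print(inclusion_bonus(1))       # 0.09375 -> the winner is paid to absorb the loser's work
```

The point of the analogy: just as uncle rewards pay the miner of a losing block for work that still secures the chain, a merge would pay holders and miners of the smaller chain for value that gets absorbed into the larger one.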

How to Do Merges

There are two methods for potentially doing a merge for tokens (and probably more that we haven't thought of).

The first method is pegging a token A to token B.
  1. Agree on a price/exchange rate for A:B
    1. Oracle to determine price
    2. Hash power signaling/ratio
    3. Market pricing on exchanges
  2. Hard fork both protocols to have the same block + rule set
    1. Enforce a specific block height for the rule change, include the pegged price ratio
    2. Price converges
  3. Before the rule set is implemented, people are free to trade out of token B
  4. Allow for atomic cross chain swaps
    1. Using Decred or 0x → hard code this into the rule set change
The second method involves one chain "absorbing" the value of the other, meaning that token A remains and token B is never used again (a toy sketch of the claim math follows the list below).
  1. Agree on a price/exchange ratio for A:B
    1. Oracle to determine price
    2. Hash power signaling/ratio
    3. Market pricing on exchanges
  2. Acquire buy out funds for A to purchase B
  3. Post a public address where all B tokens can be sent to
  4. Before the rule set is implemented, people are free to trade out of token B
  5. Burn the B tokens; each B token holder will get the agreed-upon amount of token A in proportion to how much they sent to the specified address
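Here is a minimal Python sketch of the claim math for step 5 of the "absorb" method: given an agreed exchange ratio, each address that burned token B is credited a pro-rata amount of token A. The ratio, addresses, and balances are all hypothetical placeholders.

```python
# Hypothetical sketch of crediting token A to addresses that burned token B.
# The exchange ratio and balances are made-up numbers for illustration.

A_PER_B = 0.25  # agreed ratio: 1 B token converts to 0.25 A tokens

def credit_claims(burned_b_by_address: dict[str, float], a_per_b: float = A_PER_B) -> dict[str, float]:
    """Return how much of token A each address can claim, proportional to the B it burned."""
    return {addr: amount_b * a_per_b for addr, amount_b in burned_b_by_address.items()}

burned = {"0xAlice": 1_000.0, "0xBob": 250.0}
print(credit_claims(burned))  # {'0xAlice': 250.0, '0xBob': 62.5}
```

In practice the hard part is everything around this arithmetic: agreeing on the ratio, proving the burn happened, and enforcing the claim window in the rule-set change.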

Roadblocks to Putting This into Practice

Both of these scenarios involve a lot of coordination. Imagining trying to do a protocol merge without some kind of explicit voting mechanism other than hash power signaling induces a headache right away. The future of decentralized governance will definitely play a large part in how these things happen.

Also, as we see in centralized mergers and acquisitions, the larger company often has to purchase the shares of the smaller company at a premium. We'll have to establish better pricing mechanisms beyond hash power. Ari Paul and Chris Burniske have been doing a lot of great work on fundamental valuations for this.
Additionally, atomic cross-chain swaps are not the only potential way to transfer a token from one chain to another; a protocol such as Polkadot or Cosmos might allow for this sort of thing as well.

Real World Protocols That Could Benefit

These wouldn't have to be just currency tokens; you could potentially merge utility tokens as well. For example, look at Sia and Filecoin. If Filecoin were to establish a dominant market cap and share position, it might behoove them to purchase the Sia network. One additional step would need to be taken: before individuals can claim any of token A, they would need to transfer their files over to the new blockchain. Once this is done, they can claim their Filecoin tokens.
  • Small cap token mergers
  • Prediction markets (Augur and Gnosis)
  • File storage markets (Filecoin, Sia, and Storj)
  • BTC variant mergers (BTC, LTC, BCC)


]]>
Dillon Chen
tag:dillchen.posthaven.com,2013:Post/1196673 2017-10-07T21:40:02Z 2017-10-26T23:52:17Z Cryptobanking: Brain Dump
I was recently having a conversation with Niraj about the upcoming Raiden release. We were having a bit of a debate about the upcoming Layer 2 protocols for off-chain transactions. The conversation centered around the risks of centralization within these networks, as the resulting payment graphs are distributed, but not necessarily decentralized. We also noticed that there seemed to be growing competition for crypto assets: purchasing this token or that token, plus the influx of talent into the space, seems to guarantee a future where many different things will be competing for your crypto dollars.

Competition for Crypto Dollars
  • Storage -> Filecoin, Sia, Storj
  • Computation -> Golem, Truebit, and other things?? vs something else
  • Bandwidth -> Source, or some other wifi coin.
  • Economic value (staking/tokens) -> Ethereum, you transform it to other tokens, lock it in RanDAO, or stake a Livepeer node.
  • Purchasing new tokens -> NEO, DASH etc.
  • Crypto Hedge Funds -> Prism
  • Layer 2 Protocols -> Lightning Network, Raiden, Plasma, Polkadot.

Since crypto assets are extremely liquid and can be instantaneously changed into some other digital asset, it's tempting to do so. Unless you're a day trader in the top 1%, you'll probably lose money. With all the complexity in dealing with crypto assets, a person's best bet is usually to hold, or rather HODL. In this case, individuals are hoping that the base crypto asset that they purchase appreciates in the future. HODLing is basically stashing your coins under your mattress, which many of my friends have expressed as their dominant investment strategy.

However, if we compare this to a traditional asset like cash, which can earn interest by sitting in a bank, stashing your cryptos under a mattress doesn't seem too enticing. No one's really figured out a way to earn interest by HODLing. Of course, we're not the only ones to have had this thought.

Lending Right Now

If your intention is to earn interest on crypto, right now you can lend tokens on Poloniex. When the markets are volatile, you can earn up to 0.1% per day for essentially tapping a button. However, there are two downsides to this. First, you don’t earn that much money because of the fragmentation of order books. Fragmented order books drive down liquidity, and potentially the demand for your lent crypto assets, which drives down any interest you could earn. Second, lending (and investing) on centralized exchanges carries high counterparty risk. A centralized exchange could be hacked and your funds stolen, the exchange could shut down leaving your assets locked up on the platform, or the exchange could invest your crypto and lose it.
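For a sense of scale, 0.1% per day compounds to roughly 44% annualized. A quick sketch, under the unrealistic assumption that the rate were sustained every single day with no idle periods, fees, or counterparty losses:

```python
# Back-of-the-envelope only: what an (unrealistically) constant 0.1%/day lending
# rate would compound to over a year. Ignores idle days, fees, and defaults.
daily_rate = 0.001
annualized = (1 + daily_rate) ** 365 - 1
print(f"~{annualized:.1%} per year")  # roughly 44%
```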

Instead of holding my tokens on a centralized exchange, I'd be interested in depositing my ETH or whatever token in a smart contract that would allow me to earn an interest rate, denominated in whatever token I'd deposited. All depositors can pool their funds together, which can then be lent to another party through a loan administered by a separate contract. Of course, this is exactly what a traditional bank does: lots of people deposit their money, banks lend it out at interest and split some of the proceeds with depositors.
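A minimal sketch of the bookkeeping such a deposit contract would need, written here as plain Python rather than an actual smart contract: depositors pool a token, the pool lends it out, and repaid interest is split pro rata. The class, names, and numbers are hypothetical, and a real contract would also need collateral, default handling, and withdrawal logic.

```python
# Hypothetical bookkeeping for a pooled crypto lending contract (plain Python, not Solidity).
# Omits collateral, defaults, and withdrawals; purely to illustrate the pro-rata split.

class LendingPool:
    def __init__(self):
        self.deposits: dict[str, float] = {}

    def deposit(self, addr: str, amount: float) -> None:
        self.deposits[addr] = self.deposits.get(addr, 0.0) + amount

    def total(self) -> float:
        return sum(self.deposits.values())

    def distribute_interest(self, interest: float) -> None:
        """Credit repaid interest to depositors in proportion to their share of the pool."""
        total = self.total()
        for addr, bal in self.deposits.items():
            self.deposits[addr] = bal + interest * (bal / total)

pool = LendingPool()
pool.deposit("0xAlice", 30.0)   # 30 ETH
pool.deposit("0xBob", 10.0)     # 10 ETH
pool.distribute_interest(2.0)   # a borrower repays 2 ETH of interest
print(pool.deposits)            # {'0xAlice': 31.5, '0xBob': 10.5}
```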

In crypto, there's already a movement to do this. Mining is effectively an operation that allows you to convert BTC into more BTC, and miners often finance the purchase of their equipment. By pooling your hardware together in a mining pool, you can turn this from a lottery into something closer to a fixed-interest annuity on an emerging commodity. Decentralized mining pools potentially allow a decentralized, smart contract implementation of these things. Already we see mining pools with algorithms that optimize which alts they should be mining at any given time. This carries the additional risk (and annoyance) of having to exchange the base token you'd prefer to HODL into another, potentially more volatile currency to generate a return. And since we're going to keep seeing a proliferation of protocols, managing this will get unwieldy.
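To see why pooling turns mining from a lottery into something annuity-like, here is a toy comparison of a solo miner's chance of being paid at all on a given day versus a pooled miner's steady pro-rata share. All hash-rate and reward numbers are made up for illustration.

```python
# Toy comparison of solo vs. pooled mining payouts; all numbers are made-up assumptions.
my_hashrate = 1.0            # arbitrary units
network_hashrate = 100_000.0
blocks_per_day = 144
block_reward = 12.5          # assumed reward per block in the chain's native token

p_win_block = my_hashrate / network_hashrate
expected_daily = p_win_block * blocks_per_day * block_reward    # same expectation either way

p_solo_paid_today = 1 - (1 - p_win_block) ** blocks_per_day     # chance solo mining pays anything today
print(f"expected payout/day: {expected_daily:.4f} tokens")
print(f"solo: {p_solo_paid_today:.2%} chance of any payout today; pooled: ~{expected_daily:.4f} tokens nearly every day")
```

The expected value is identical; the pool only smooths the variance, which is exactly the property a staking pool or cryptobank would want to offer depositors.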

Even more related to our cryptobank concept is a decentralized mining pool for staked tokens. You don’t have to convert your crypto into dollars to purchase hardware to earn crypto; instead, you give up the time value of your crypto assets for more crypto in the future, and that’s okay if we’re HODLing. Rocket Pool and 1Protocol are both staked mining pools, and both implement a token on top. In Rocket Pool, every ETH deposit generates a one-to-one redeemable token for Ethereum. This harkens back to the private currency solutions and bearer bonds of years past. However, cryptobanks could be used for more than just staking protocols and letting individuals margin trade.

Furthermore, lending to purchase other tokens is only a slice of activities someone might undertake with a lent crypto asset. Crypto banks could drive liquidity by lending into all these different protocols.
  • Staked mining on Ethereum
    • Things that let you earn money -> RanDAO, Swarm
  • Staked level 2 (Raiden or other things)
  • Delegated staking on protocols that share this design
    • Livepeer.
  • Exchanges to provide liquidity in general, and for margin trading.
  • Operating a 0x node
  • Augur/prediction markets
  • Oracles
  • Numeraire -> (if I really believe my model is superior), then I should go out and purchase token
    • But what if I don’t have assets??? 
  • NEO -> interest for holding token
  • Node operators -> Lightning Network/Truebit/Coinjoin-as-a-service
  • Polkadot/other scalability tokens

History Applied to Crypto

Fractional reserve banking serves two purposes. One role is operational: banks make transferring money easy. Instead of moving my gold bar to Timmy, I can ask the bank to change a ledger entry. The other role is economic: providing investment returns to individuals. If we look at the history of banking, we went from warehousing money (and not lending it) to lending it out for a fee, with depositors retaining only paper slips that allowed them to transfer money to one another.

The operational role of banks is greatly lessened in a crypto economy, as individuals can directly transfer tokens to one another using built-in asymmetric key cryptography. However, individuals may still want to earn interest as they hold, and this is where a cryptobank might come in.

In the far, far future, when on-chain transactions are too expensive (hopefully this never happens), cryptobank contracts/DAOs may enable transaction scaling. They could operate as Layer 2 hubs, with connections to other cryptobanks and then finally to individuals through payment channels. Although we should continue to work on scaling the base protocols, banks could potentially keep the operational role of transferring money (the traditional role of banks). We’ll leave the discussion of whether or not this is a good thing, given the centralization involved, for a separate post.

A common bitcoin debate is if it needs to be used as a medium of exchange as well as a store of value. The argument in one sentence for bitcoin needing to be a medium of exchange is that if everyone holds bitcoin and never spends it, there will be no transactions, miners won't be incentivized to secure the network and it'll fall apart. However with cryptobanking, bitcoin HODLers who are in it for the long haul are able to lend their bitcoin to others, driving transactions, paying fees to miners, and keeping the bitcoin as a medium of exchange dream alive. 

And that’s about it! Thanks for reading.


]]>
Dillon Chen
tag:dillchen.posthaven.com,2013:Post/1093477 2017-10-03T08:28:00Z 2019-02-27T05:44:54Z In Defense of Young Founders

Sometime during the summer, a friend of mine questioned whether young founders (let's say younger than 26) would be able to develop the biggest startups of the future. The argument was that startups of the future will trend towards hard tech. Technologies like biotech, robotics, AI, and material science each take years of domain expertise to build in, not to mention being capital intensive. Both of those form barriers for young founders to get started. Contrast this with the recent history of companies centered on information technology and internet startups. We all have the image of a genius hacker developing applications as a teenager. This was (and still is) an open industry, where the tools for development are literally on everyone's desktop. With all that said, it sounds like we have to say goodbye to the garage startup. So are there any reasons to be optimistic about the young founder of the future? 

In the past few decades, there have been many examples of student founders. Michael Dell, Bill Gates, Woz, and Steve Jobs all come to mind. It's hard to think of examples that stretch outside of this range, but we shouldn't fall prey to availability bias. 

A quick survey of Wikipedia shows that in each technological era, young founders have always been able to make a name for themselves. This list is highly biased towards US companies and is not comprehensive by any means. And it's no guarantee that the trend of young founders will continue just because it has held in the past; just ask Nassim Taleb. Startups are a uniquely creative pursuit. They sit between mathematics, a totally abstract pursuit, and history. In "Age and Outstanding Achievement", Simonton examines the age of peak creative/leadership output in different fields. Poetry, pure mathematics, and theoretical physics exhibit a peak age in one's late 20s or early 30s, while novel writing, history, philosophy, medicine, and general scholarship exhibit a peak age in one's late 40s or early 50s. I think entrepreneurship skews towards the younger side, but why? Naval Ravikant and Marc Andreessen have already written two great blog posts about this, and I'll quote liberally from them here. 
"The first set comprises problems that are solved by an emotional state (poetry, painting), by loading a very difficult single framework into your head (math, physics, coding), and / or competition (driven by sex drive and time-sensitive). The latter set are more rational, are systems problems rather than point problems, and don’t have time-sensitive competition. " - Naval
Compared to internet startups.
"Modern entrepreneurship, especially web entrepreneurship, is extremely competitive / time sensitive, requires enormous amounts of iteration even withina single product life-cycle, and often requires solving many challenging technicaland business problems one after the other in a public view (with the opposite sex watching). So, it favors the young and single." - Naval
While Naval says that the young founder phenomenon may be limited to the modern age, I'm making the generalization, using the list built above, that entrepreneurship has historically maintained this youthful skew and will for the foreseeable future. Another biological factor that may cause the youthful skew is the difference in peaks of fluid and crystallized intelligence. The young founder's combination of enthusiasm and peak fluid intelligence helps her with identifying new markets, iterating on products, and more. Yet founders alone are not sufficient to create huge startups. Networks of other talented people, financing, production infrastructure, and the right knowledge also need to be in the mix. 

Although hard tech startups will always require fundamental knowledge to get started and to iterate, knowledge is now easier to acquire than ever. YouTube videos, pirated textbooks, Reddit, and StackOverflow are just a few aggregated knowledge bases. Knowing things within a domain is now easy enough, and young entrepreneurs today also have the advantage of seeing non-obvious connections between different fields. arXiv and scihub.org have allowed academic papers to be shared as soon as they are written. It's amazing to watch implementations of a DeepMind paper being worked on by communities around the globe simultaneously. Usually within one week you can expect to see code from that paper, and in another week that code is doing something as interesting as writing episodes of Friends or analyzing the genome.

Sadly, not all fields enjoy the low startup costs of software and AI startups. The hard tech startup often needs lab space or large capital commitments to start building prototypes. Not to mention, the speed of iteration for AI is probably some factor of 10x faster than biological experimentation or material science, because you don't have to wait for cells to reproduce (or die). Again, new innovations are on the young founder's side. Infrastructure is now almost as easy to deploy in hard tech as it is for a developer to use AWS, making iteration roughly 10x faster and 10x cheaper.
  • CRISPR -> 10x easier to gene edit anything. “With CRISPR, literally overnight what had been the biggest frustration of my career turned into an undergraduate side project,” says Reed, of Cornell University. “It was incredible.”
  • Desktop gene sequencing -> 10x cheaper and faster to analyze your genome
  • Cloud experimentation platforms -> 10x faster/cheaper way to run and scale. I compiled some other bio related advancements here.
  • AI applied to VR Content Dev -> 10x faster generation of scenery and characters
  • Open Source CS -> 10x more stable and useful software... for free
  • Physics/material science/chemistry/protein folding -> 10x faster experiments with computer simulation (just wait for quantum computers)
  • Bitcoin/cryptocurrency -> 10x better way to incentivize open protocol adoption. 

After a founder uses those basic tools of infrastructure to find an idea that looks like it could be impactful, they can leverage new funding mechanisms to scale more quickly. The funding of innovative ideas has long been concentrated in the hands of a few. Governments once reigned supreme in funding things; as we became wealthier, this trickled down to wealthy individuals, then to professional risk investors, and now to individuals in the form of crowd sales, Kickstarters, and most recently app-coin sales. If you accept the idea that no one can judge innovation at the earliest of stages--that VCs and angels are using basic heuristics to cull bad startups as opposed to picking winners--then new funding mechanisms can do the job just as well. The free flow of capital through crowdfunding and more diversified risk at the seed stage allow for more companies to get created. 

The internet and associated products should help entrepreneurship in general. If history is any guide, these types of innovation should help those out at the edge the most--today's young founders and others that are resource poor. More young founders can start hard tech companies of the future as the speed of iteration, cost of starting, and intellectual capital get easier to access. The more abstract tools get, the more quickly we can go from insight in mind to project in hand. I for one, am excited about this future.


TLDR

Young founders will win because:
  1. the nature of innovation has always skewed young
  2. and will continue to, as long as the composition of entrepreneurship stays the same (more geared towards fluid intelligence and less towards crystallized)
  3. the inputs of entrepreneurship (i.e., knowledge) are increasingly easy for young entrepreneurs to access
  4. the tools of development and capital are easier for anyone to acquire
]]>
Dillon Chen
tag:dillchen.posthaven.com,2013:Post/1116359 2016-12-18T13:28:55Z 2016-12-18T19:31:52Z Some Work to be Done

Here are a few things that we don't know. If you are working on any of them, I'd be curious to learn more.

  • What consciousness is and feels like to other entities that are not ourselves
  • How the components of our brains work together to remember, generalize, learn, and act
  • How to safely augment our mental capabilities
  • How to create machines that can remember, generalize, learn, and act
    • How we can do this safely
  • How to create robots that can generalize and can act in a home
  • How to best help those suffering from mental illnesses
  • How to safely create powerful electronics that can remain inside the body indefinitely
  • How to reverse dementia, CTE, Alzheimer's, and more
  • How to reverse cancer
  • How to reverse heart disease
  • Why we keep getting fat and how to stop it
  • The best way to reverse diabetes (and other metabolic disorders)
  • The root causes of aging
    • How to measure biological age
  • How to model the large-scale systems of biology
  • How to cheaply mass produce DNA
  • We don’t know how to rearrange biological components to do useful things in a safe manner
    • We don’t know what individual biological components do at various times
  • A cheap, safe, precise way of delivering drugs or other biologics
  • A safer, cheaper way of developing treatments
  • The best way of keeping good monopolies (those that create lots of consumer surplus early in their lifetime) from turning into rent-seeking organizations later in their lifetime
  • How to accurately model large-scale social systems 
  • We don’t know the best way to organize and make decisions at a scale of 7+ billion people
  • The best way of solving the tragedy of the commons
    • Climate change
  • How to organize large-scale groups of people, capital, and knowledge in systems to create value
  • How to correctly allocate resources to those doing research
  • The best method of learning, an inefficient, difficult, and sometimes unenjoyable process even for the smart and self-motivated
  • The best way to motivate people to stay happy, work on interesting problems, and to contribute to society
  • How to scale-up self-sustaining fusion reactions
  • How to scale up generalized quantum computation
  • What lies beyond our universe
  • What the fundamental nature of our reality is, at the lowest levels
  • If the universe, at the lowest levels is continuous or discrete
  • How to get a lot of people off the planet onto another planet safely
  • How to get minerals and other resources from off the planet onto the planet
]]>
Dillon Chen
tag:dillchen.posthaven.com,2013:Post/1109655 2016-11-21T19:47:25Z 2017-06-26T23:15:48Z The Great Firewall of Facebook

Media, when concentrated in a few individuals or the state, has always been subject to censorship and influence, whether by direct action or inaction. Western Union, China, and Napoleon are a few prominent examples. Now we have Facebook. The press lauded Facebook and Twitter when they influenced the Arab Spring, but is chafing at social media’s power now that it’s come to influence our politics in the States. 

Facebook, like any social network, exercises de facto censorship through its tweaking of newsfeed algorithms; Facebook is just the largest and easiest target. Of course, unlike China, Facebook’s aim is not to achieve any particular political goal. Its aim as a public company is to create long-term shareholder and user value, so any tailoring of the newsfeed algorithm will be made towards those ends. It is a bit scary that Zuck controls the majority of Facebook’s voting stock, the board, and, further downstream, the algorithms that decide our news feeds, all at the sheer scale of Facebook’s user base. When shareholder value comes into conflict with user value, the most valuable walled garden in the world has a frightening amount of influence; a whole country is just a medium-sized audience at their scale. Facebook has always been a walled garden, killing off any products (Facebook Platform, access to media, etc.) that captured too much value from the all-important newsfeed. Much like Facebook, the internet in China is also a walled garden. With the Chinese internet, we know the aims of censorship are to achieve political goals. 

As it’s started to influence our domestic politics, there have been proposals in the tech community, within Facebook, and in the broader community to tweak the Facebook algorithm in various ways. Here are some proposed solutions that I don't feel are permanent fixes:
  1. Users proactively changing their feeds. This won't work because people don’t shift from their default option.
  2. Tweaking the algorithm. We’ve seen with SEO and Google, this is just an arms race.
  3. We can use traditional anti-trust regulation. 
    • Iron rule of information economies: everything tends towards monopoly because of network effects and the zero marginal cost of distribution
    • Smaller groups of people might become more of an echo chamber
  4. We could turn Facebook (and Twitter) into public utilities/non-profit
    • This returns to issue of who controls it. If it the government, this would always be at risk of turning into a propaganda machine.
    • If it another rich billionaire, it runs the same issues as traditional media organizations (as well as Facebook).
The core problem with Facebook, Twitter, and LinkedIn is that they need to aggregate their data and users. That’s why they’ve closed off their APIs to developers: they know their graph is what makes them special. Twitter used to be open with their data, but closed it off after they saw clients like Tweetbot and TweetDeck as a potential threat to their data moat. They can’t relieve us of the tyranny of the algorithm.

Yes, IMAP and access to emails is one example where multiple parties have access to data, and where companies can still make money.

However, the blockchain and app coins might provide a better solution, ending censorship-by-algorithm while still incentivizing people to create open products. Just as we can use multiple clients to look at our email, we could do the same here, while also incentivizing the creators of these protocols.



|                      | Facebook                          | Email                             | Blockchain Social Apps                           |
| Front end            | One (money made up here with ads) | One (money made up here with ads) | Many                                             |
| Algos                | One                               | Many                              | Many                                             |
| Token/access to data | None                              | None                              | Money made down here with increasing data/users  |
| Data/Blockchain      | Have to guard this                | Free to share                     | Share freely/forkable                            |

We can tie a token directly to read/write access to the data. The token should rise in value, just as Bitcoin has risen in value, as more transactions and more data are added to the ledger. When the business model of the token is tied directly to the data, we don't have Facebook's problem of being unable to share data without undermining the business. 

Just as we can view our email with multiple clients, we'd be able to view our friendship graph, and the stories and links our friends post, with multiple front ends. On the front end, it doesn’t matter which UI/UX experience the user sees, or which algorithm sorts their feed*. Different entities can A/B test algorithms for sorting newsfeeds: ones that allow users to see fake news, ones that expose users to more long-form content, or even ones that promote argument. Cryptocurrency-based social networks can end the de facto censorship that Facebook holds over what news a user sees. And some people have already built prototypes of these social networks: Squeek.io and Eth-Tweet. To me, this seems like a potential solution that aligns everyone’s incentives. It’s a way for technology to solve problems created by technology.
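To make "many front ends over one shared graph" concrete, here is a small Python sketch: the same set of posts (imagine them read from a public chain rather than one company's database) ranked by two different client-side algorithms. The post fields and scoring rules are invented purely for illustration.

```python
# Illustrative only: two hypothetical front-end ranking algorithms reading the same
# shared post data (imagine it pulled from a public chain, not one company's DB).

posts = [
    {"author": "alice", "text": "Long-form essay on governance", "likes": 12, "length": 4200},
    {"author": "bob",   "text": "Hot take",                      "likes": 90, "length": 60},
    {"author": "carol", "text": "Local news report",             "likes": 35, "length": 900},
]

def rank_by_engagement(feed):
    """A Facebook-style client: most-liked first."""
    return sorted(feed, key=lambda p: p["likes"], reverse=True)

def rank_by_depth(feed):
    """An alternative client: favor long-form content over viral snippets."""
    return sorted(feed, key=lambda p: p["length"], reverse=True)

print([p["author"] for p in rank_by_engagement(posts)])  # ['bob', 'carol', 'alice']
print([p["author"] for p in rank_by_depth(posts)])       # ['alice', 'carol', 'bob']
```

Because the data layer is shared, switching feeds is just switching clients; no single company gets to decide which ranking everyone sees.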

Facebook represents a centralized model of social networks. They’ll still remain very important. While a great utility, it also runs counter to the spirit of the open web. Perhaps blockchain social networks can return us to the open-source past of the web, while still allowing creators to satisfy their self interest.

* Gating access to the underlying data doesn’t have to mean that the average user pays to use the service; different token distribution mechanisms can be used so that top users (which will be advertisers or celebrities) subsidize access for the average user.  
]]>
Dillon Chen
tag:dillchen.posthaven.com,2013:Post/1097679 2016-10-20T18:08:00Z 2019-02-27T05:46:10Z Learning from Advice

College is a pretty stressful and uncertain time. And if you’ve heard anything about how cut-throat an environment Penn is, then you know how much people worry about their futures. Am I freaking out a bit? Are my friends freaking out a bit? Yea. But this essay isn’t about commiserating that experience directly. It’s about what we do when we face these uncertain times. Usually, we look for answers on Google, in churches, older peers, parents, and even fortune cookies. We look to anyone and anything that might have pertinent advice. Yet for all the so-called advice we get, why doesn’t much of it seem to stick?

When I ask for advice, I'm in a moment of semi-crisis or with someone who I think knows more than me. Hearing a piece of seemingly insightful advice is excellent instant gratification. Regardless of the quality of the advice, anything that sounds remotely confirmatory of my planned direction gives my brain a small dose of dopamine. If I have a notebook handy, I might even write it down. But after a while I forget that piece of advice and move on with my life just as before. Until, of course, I invariably do the same thing all over again—ask, feel good, forget. When I ask for advice, I actually do want to improve my life's trajectory. So that’s not a very helpful cycle.

Advice is something that’s been gained through years of experience, and that’s how it should be applied. Advice is a little kernel that we are supposed to carry and ingrain in our minds, a habit or behavioral change that we should make. Yet in the moment of asking for advice, our minds do something called substitution. Because the question of how we will feel in the mid-to-long-term future is so cognitively hard (impossible?) to forecast, we substitute that hard question with the easy question of how we feel the moment after we receive a piece of advice.

Knowing that we shouldn't continue this ask-and-forget cycle is a good start, but it’s also pretty general. In the end, taking advice and acting upon it is about building new habits. Since there are already mountains of literature out there about building habits, I won’t go into it. Deciding to take advice and change a habit is also a step past where advice can trip us up: bad advice implemented well can lead us down an American Beauty-style midlife crisis. 

In general, advice from books is probably better than advice from a person. Books older than 50 years and ones with ancient wisdom are extremely helpful: they give time-tested advice with the intuition and experience behind it. When moving on to seeking wisdom from people, make sure to keep track of how well that person knows their domain before taking their advice. When someone is older or seems to have more esteem, it's hard not to get sucked in by their halo. If the advice is delivered confidently, it's harder to discount, even though it's often guesswork on their end. Even if they do somewhat know what they are doing, be wary of how they came across their advice. The environment in which advice is sought and experience earned matters. Wicked environments are those where individuals can learn the wrong lesson from participating. A broad example of this is the 2001 dot-com crash: a whole swath of people learned from that time that tech is a bad investment, something we can now see is not true. This applies to investing or any highly random, low-feedback environment (finding a soulmate, landing a dream job?). 

Take advice from people whose shoes you want to be in. The future is indeterminate; in ten years, I could see myself as a startup entrepreneur, a VC, or even doing something in public policy, so the cross-section of people I'd seek career advice from is large. After asking and compiling advice from multiple sources, I try to discern the experience behind the advice, look for ways in which the advice breaks, make sure the incentives of the people dispensing it align with mine, and not ask for more advice before changing my own behavior. In the case that their advice conflicts, as it often will, I just go with my gut. I do this because it probably means either I'm asking the wrong question, or the decision the advice bears on is inconsequential, or both. Not overanalyzing the situation can be tough when deciding whether or not to drop out of school. In the case that people's own actions conflict with their own advice, it matters even less what course of action we take. Advice is just a data point, as every situation is different. Being able to live with your decision is what's most important in the end (Thanks Demps).

]]>
Dillon Chen
tag:dillchen.posthaven.com,2013:Post/1014491 2016-03-17T05:48:00Z 2019-02-27T05:46:41Z Free Internet and Electricity (And Crypto) Everywhere

Before finals last year, I traveled to Belize to escape school. I felt the full force of the 100% humidity and the sun beating down on our backs at a scalding 97 degrees. Trouble began to brew as our car rental fell through. It wasn't turning out to be the relaxing getaway we thought it'd be. Luckily, we got a car from Pauncho's, a local car rental service, at double the normal insurance premium. We soon pulled away from the airport and set our sights on a long drive. 

Belize is undeniably beautiful. Glancing up from the road, I caught glimpses of lush greenery and huge mountains in the distance. And later in the trip, we spent time in a rainforest tree house, surrounded by coos and croaks from all sides. However, this beauty was juxtaposed with the conditions of the towns we visited. I saw weather-worn houses and one-room schools deprived of access to the internet. On the trip, we paid a huge premium for this privilege: $70 for a hotspot and 2GB of data. This was a luxury that many of the people around me wouldn't be able to acquire. While meditating on that, I caught up with the connected world. 

I read about how solar energy was spreading around the developing world due to low-cost Chinese panels and about the new release of the 21 Bitcoin Computer. The "21" press release had a quote that stuck with me--"a miner in every chip and device". Sometime while reading this article, a flash of inspiration hit. I envisioned an integrated system to give access to the internet and electricity for free--a solar panel, embedded cryptocurrency miner, battery, and Wifi/3G access point. We would give the device and internet services away for free and earn money by mining cryptocurrency with free solar-generated electricity.

We have 5 billion phones on the planet, yet developing nations around the world not only pay the highest costs per capita for smartphone usage but also for merely powering those phones. We know that the smartphone is everyone's gateway to the internet. However, the internet that you and I use at home is not what those in the developing world use. Phones are often unable to update their firmware because the cost of that download alone would eat up an entire month of data. Data plans can cost as much as 37% of a worker's monthly salary in the developing world, and in rural areas, this is even more stark: these areas often don't have access to cellular service at all. I know this not only from months living in my ancestral farm town in China but also from this recent experience in Belize.

I recently ran a back-of-the-envelope model to test the feasibility of this design. Thanks to increasing solar panel efficiency, decreasing hardware costs, cheap computing power, new 4G/LTE/Wifi satellites, and Bitcoin, the numbers seem to work. We could potentially give everyone in the world access to today's essential utilities: free internet, electricity, and access to a global financial system. Who knows if this idea will end up working, but the potential seems pretty great :) If anyone has any info to invalidate this idea, please do so; in the meantime, I'll be learning more about crypto price dynamics, satellite internet, and the reliability of hotspots. Then moving on to building a prototype!
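In that spirit, here is a stripped-down sketch of the kind of back-of-the-envelope model I mean. Every input below is a placeholder assumption to be challenged, not a measured number, and the output depends entirely on those placeholders; the actual model referenced above used its own inputs.

```python
# Back-of-the-envelope feasibility sketch. Every number below is a placeholder
# assumption and should be replaced with real quotes and mining data.

panel_watts = 100                # assumed solar panel rating
sun_hours_per_day = 5            # assumed average usable sun per day
miner_revenue_per_kwh = 0.08     # assumed $ earned per kWh spent mining (swings with price/difficulty)
hardware_cost = 250.0            # assumed cost of panel + miner + battery + hotspot

daily_kwh = panel_watts * sun_hours_per_day / 1000           # energy harvested per day
daily_revenue = daily_kwh * miner_revenue_per_kwh            # $/day from "free" electricity
payback_days = hardware_cost / daily_revenue if daily_revenue else float("inf")

print(f"{daily_kwh:.2f} kWh/day -> ${daily_revenue:.3f}/day -> payback in ~{payback_days:,.0f} days")
```

The interesting work is in the inputs: panel efficiency, mining margins, and hardware cost all move quickly, which is exactly why the model is worth rerunning every few months.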

]]>
Dillon Chen
tag:dillchen.posthaven.com,2013:Post/996587 2016-02-21T00:40:00Z 2019-02-27T05:46:57Z Biology in the Coming Years

If I had to compare the development of the synthetic biology/biotech stack to that of the computer, I would say we’re still pretty early. In biology, we’re in the big mainframe era, before the development of the transistor and integrated circuit.


Here's my thinking:


|                                          | Biology Today                                                                                         | Mainframe Era                                                            |
| Long dev. cycle times / shared resources | Waiting for western blots and gels to run… waiting for cultures to grow. A few hours to a few days.   | Trying to get mainframe time to run programs. A few hours to a few days. |
| Low debugging                            | No idea if an organism works until it's actually produced (no in silico modeling)                     | Punch cards!!! And no compiler                                           |
| Low reusability/reliability of parts     | Genes often don’t work outside of their original organism                                             | Vacuum tubes get moths stuck in them                                     |
| Fragmented community                     | Limited hackers, mostly stuck within universities                                                      | Limited hackers, mostly stuck within universities                        |
| Low abstraction                          | Individual gene sequences                                                                              | Punch cards/machine code                                                 |
| Low complexity of programs               | Today: yeast that makes beer and a scent. Future: designer cows??                                      | Then: computing missile trajectories. Today: Google                      |

Moreover, right now, Ph.D. students and undergrads are oftentimes just manual labor.

These students, while credentialed as ever, don't get to touch interesting problems like experimental design, nor do they have much of a say in what projects they work on. I can personally attest to this. For the few short months that I worked in a cancer lab, I spent the first week excited to learn to perform different protocols. The next few months were spent bored to tears: day in and day out, all I did was move a small amount of liquid from point A to point B. The automation of this labor will bring huge headwinds.

It’s not all bad news. Just as the mainframe era evolved into the computer revolution, the bench-work era in biology will give way to a cloud-based, automated version of biology. This is great news for the general public and a great business opportunity. Here are the startups that are bringing a CS approach to biology.

Here’s a link to the Google Sheet with all the companies.

  • The “App” Layer -> Machine learning applied to discovery: These companies are using large data sets and deep learning techniques to make biological products to sell.
    • Existing drugs: Mine drug databases to find new combinations that will work as treatments for different diseases. This is a huge growth area and makes a lot of sense for a deep learning firm entering the market. Since drug combinations don't have to go through Phase 3 clinical trials again, and only have to prove that the combination is safe, this can be a capital-efficient method of producing cures.
    • Molecular: Companies that are making small molecules to treat disease. Atomwise is the most successful company in this space. This also seems like a type of data that deep learning techniques are able to represent more easily than the complex biological circuits. http://arxiv.org/pdf/1510.02855.pdf
    • Genomics/Biologics: These companies are using ML/DL techniques to create useful DNA Sequences and Antibodies. 
    • Organisms: These companies create functional microbes that do different things. End users buy products that these microbes produce--fragrances for perfumes, oil, and therapeutics. Although these companies might use machine learning, this process is more about trial and error and iterative design, compared to the more automated process of small-drug discovery.
  • The “Backend” -> Biological data analysis software: Companies here either sell analysis software or offer specific recommendations based on their proprietary algorithms to clinicians, end consumers, or researchers. I’m not sure who will win in this space, as I don’t think it’s clear that having large datasets is very defensible. I think this is mostly because the cost of data acquisition is decaying exponentially. This may be the reverse of the situation for consumer internet companies: the data is easy to get, but the algorithms are the important thing. See Craig Venter’s failed attempt at monetizing the very first full human genome sequence. Is the timing right now? 
    • -Omics: Besides our genes, there are the RNA, small molecules (like lipids), and proteins that make up our cells, each with their own “-omics”: transcriptomics, metabolomics, and proteomics respectively (and don’t forget the microbiome). HLI and iCarbonX are the two largest companies trying to make sense of all this stuff.
    • Genomics: Genetic analysis software that goes to researchers and clinicians that help drive better decisions.
    • Consumer: Recommendations are given to end consumers. It’s interesting to see that a large consumer player is transitioning from making money on selling tests/data to developing drugs. Will other players follow?
    • Imaging and Misc: More biological data such as image data, ultrasound, or public health. There’s a lot of interesting things that can happen here. Using MRI data to help doctors diagnose PTSD and other neurological conditions is one big thing that comes to mind.
  • Protocol Layer -> Distribution of existing datasets: These companies provide what data there is, how to share data, and how to compute on data.
    • -Omics: Public organizations provide data sets. Companies like Google Cloud Platform allow you to store large data sets and analyze them to a certain extent.
    • Genetic Variation: Companies here are mapping out the variation within genes.
    • Circuits: These companies build off the popular iGEM competition and the synthetic bio movement to provide a reusable set of genes to build with. These are usually free to the public; however, organism discovery companies usually have proprietary genes and circuits that they use.
  • The Internet -> Collaboration Software for People: These are more traditional software products—content platforms, data sharing, and design tools.
    • Literature and the Research Network: There are many attempts at making journal articles easy to find and researchers more accessible.
    • Protocols: These are attempts to make biology more reproducible through the creation of standardized languages to describe experiments in discrete, repeatable steps.
    • Gene Design Tools: The IDEs for biology. Software here is trying to make genes and organisms easy to build with WYSIWYG and visual interfaces. A lot of these products are put out by DNA synthesis companies that want to make the designs scientists produce… for a profit.
  • Creating a Functioning Lab: Funding and bench work are broken. Moving towards a fully automated lab.
    • Funding/Equity Models: Everyone knows that basic research funding is broken. Both the number and average size of grants are decreasing. There are many crowdfunding competitors here. There’s also an interesting attempt at creating “equity” with the blockchain.
    • Machine Automation in the Lab: Companies here are looking at the hardware in the lab. Different approaches include an Uber for Lab Experiments, an AWS for experiments, and creating remote access for your own lab.
    • Automating Assays: Taking care of the mixing and matching of assays/reactions within a lab.
    • Lab Management Software: Traditional software that is trying to get a lab functioning better.


My initial thoughts on investment themes:
  • The AWS for lab automation as well as computation will be huge. Automation frees up more than man-hours; the lower cost of science will allow scientists to conduct ever more research. Biology has historically been a pretty good adopter of computational techniques to model/simulate/discover organisms. However, none of the three things necessary for machine learning (data, computational capacity, and algorithms) has historically been able to handle the modeling of biological systems. All three areas are now changing. In the past, a petaflop of compute would have cost an effectively infinite amount of money; now it costs only about $400 on AWS. By 2020, we’ll be producing more genomic data than is uploaded to YouTube. All this data will need to be stored safely and computed on. Deep learning in discovery is only going to become more interesting as those algorithms continue to develop.
  • Continuing machine learning’s march into basic research/medicine. There are lots of attempts at making sure research is read and that people can collaborate, but is that the right approach? Even now, there's not enough time for a biologist to stay on top of the current literature. Although it's early, there are attempts at extracting structured data from the literature and pushing it through Watson to synthesize findings. After synthesis, researchers or clinicians can use the data to design new experiments and make more informed decisions. This will only quicken as adoption spreads of high-level, machine-readable languages for describing experiments.
  • How to share data is an open problem: There haven’t been many businesses trying to build large-scale open sharing of genetic info/data sets. Although both HLI and iCarbonX endeavor to aggregate huge data sets to (in the long term) create medicines that extend human lifespan, their short-term plan is to sell sequenced consumer data to drug companies through B2B licensing agreements. This places the valuable data outside the hands of smaller researchers and gives patient data to large companies. I’d be interested in seeing how bitcoin (and especially 21) plays into the development of open sharing in biology. With projects like https://github.com/joepickrell/genome-server-21 and https://github.com/joepickrell/phenopredict21 happening, bitcoin shows its flexibility. Although these were proofs of concept, I think this kind of analysis has the potential to put personal health data sharing in the hands of the people rather than doctors and companies.
  • Developing direct relationships between patients and drug companies. Many companies are taking a very new approach to finding patients: developing direct relationships with the patients/users of their drugs. Instead of partnering with hospitals and large health care networks to find study candidates, they can do so at a lower cost using the internet. 23andMe is a shining example.
  • Bio is becoming a lot cheaper.  Look at the Perlstein Lab. They're able to do drug and mouse studies on software startup run rates.

Work being done by these companies to bring biology up to software speed is incredible. But what does it really mean for end consumers? What kind of products will we see? Here are my predictions for what we'll see by the end of 2020:

All errors are my own. Thanks for reading! 
]]>
Dillon Chen
tag:dillchen.posthaven.com,2013:Post/975723 2016-01-23T20:40:00Z 2019-02-27T05:47:12Z DisneyWorld and Tech Habits

I had the good fortune of spending a lot of time with my extended family this past holiday season. A group of twelve with ages ranging from four to eighty-plus were shuttled down to DisneyWorld. 

"Are we having fun yet?"


It was endearing to see my youngest cousin's eyes light up as we explored the amusement park in between her bouts of crying. However, my next youngest cousin, age thirteen, did not share this same sense of wonderment. Instead, he was obsessed with maximizing the number of likes on his Instagram photos. The eldest among us, the young Baby Boomers, were also stuck on their phones browsing WeChat. Although the sample sizes are small, each generation had a different relationship with their phones, yet no group used their phones any less than the others.

Generation Z kids were virtually born with smartphones in their hands. They think Facebook is too confusing, but as they enter high school, they'll be forced to use it. Sorry kids. Facebook is the new LinkedIn (which is the new email). After getting off Aladdin's Magic Carpet ride, we went to cool off with Dole Whip, pineapple-flavored ice cream. As soon as we got the Dole Whip into our hands, my twelve-year-old cousin was taking pictures to post to Instagram. He continued to edit, filter, and post Instagram photos ASAP. I quizzed him on his strategies to garner more likes on Instagram, and he talked about how specific times during the day were better and worse, how he had multiple accounts to drive traffic (read: SPAM), how he'd use Instagram Direct to organize group chats, and how he'd add hashtags on hashtags on each photo. While older folks might share that they ate Dole Whip in casual conversation around the water cooler, he wanted to share in real time. Just goes to show that the internet really is everywhere.

Millennials grew up on desktop computers. We might be able to put our phones down while waiting in line, but probably not. We talk mostly with friends through group chats and Snapchat. On my own phone, I kept up with college friends in several different GroupMe's, sometimes simultaneously sending chats back and forth with the same friends in different GroupMe groups. To a certain extent, we're caught in the middle: conscious of when we use our phones, but still trying to share things on Snapchat in the moment. We browse Facebook as a last resort and mostly while at home. We're the only ones who think it's a good idea to carry around a DSLR; the other groups stick to using their phones. We're still trying to outgrow our hipster phase.

(Yung) Baby Boomers. These folks came to the internet and mobile phones late in their lives, and as a result of that unfortunate occurrence, their thumbs aren't as fully developed as the younger generations'. Because of that, Baby Boomers are forced to poke at their screens with their pointer fingers. Although this trait makes me laugh, it is actually an advantage while browsing their app of choice, WeChat. WeChat employs a text-heavy interface, with several layers of menus and lists that need to be carefully navigated to post pictures and chat in group chats. A fat thumb is just not up to this task. These Baby Boomers also favor voice conversations when trying to make the smallest of small talk. They treat their text messages like an email inbox, allowing unread messages to pile up. While I'd be compelled to tap on each blue dot, my mom has no problem letting hundreds of messages go unopened.

While these groups may have the same apps downloaded, their habits across apps vary greatly. The metaphors they bring from their previous experiences with tech inform how they use their phones. For me, the best moments of our trip were the times when we put our phones down and shared cringeworthy family jokes.

---

Thanks to Josh Lee for reading a draft of this.

]]>
Dillon Chen
tag:dillchen.posthaven.com,2013:Post/967100 2016-01-10T00:26:00Z 2019-02-27T05:47:22Z The Startup Game

A few weeks ago, the Guesstimate beta came out. It's pretty cool; it’s like Excel with Crystal Ball built right in. You can input a single number or a range of values and build models with them. Guesstimate’s release and the holiday season gave me the perfect chance to explore an idea about the startup industry: I had been meaning to build a model of the formation and development of startups through to their eventual failure or exit.

This is one in a long line of attempts to quantify an oftentimes opaque industry. Two prominent examples of data-driven approaches to venture financing are Aileen Lee’s TechCrunch article that popularized the term ‘unicorn’ and a recent Cambridge Associates research report on venture returns becoming less concentrated. While both of these are good attempts to understand aspects of startup formation and funding, it’s often hard to understand how a startup moves through this process if you are new to the industry.

 
The startup industry model in Guesstimate takes inspiration from Sam Gerstenzang's Open Source Venture Model and Bryan Johnson’s OSF Playbook. Like any model, my startup model is an attempt to make explicit my assumptions and beliefs about the world so that they can be tested. It allows you to change values to see how each element pushes and pulls on the others. It follows one cohort of companies started in a year through their life cycle, and it assumes a set amount of capital available at each stage that is always spent on financing that set of companies. You can play with the model here. Right now, you can make changes to the model on Guesstimate, but no changes are saved once you leave the page. Varying the “exit multiple” and the number of deals participated in by VCs has the most dramatic effect on the model.
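As a rough companion to the Guesstimate spreadsheet, here is a hypothetical Monte Carlo sketch of a single cohort: assumed stage-advance probabilities, assumed check sizes, and a heavy-tailed exit multiple stand in for the spreadsheet's ranges. None of the probabilities or parameters come from the actual model; they are placeholders to show the mechanics.

```python
# Hypothetical cohort simulation; probabilities, check sizes, and the Pareto exit tail
# are placeholder assumptions, not values taken from the Guesstimate model.
import random

random.seed(7)

ADVANCE = {"seed": 0.4, "series_a": 0.5, "series_b": 0.6}  # assumed odds of reaching the next stage
CHECK = {"seed": 1.0, "series_a": 8.0, "series_b": 25.0}   # assumed $M invested per stage

def simulate_company() -> tuple[float, float]:
    invested = 0.0
    for stage in ("seed", "series_a", "series_b"):
        invested += CHECK[stage]
        if random.random() > ADVANCE[stage]:
            return invested, 0.0                   # dies at this stage, returns nothing
    exit_multiple = random.paretovariate(1.2)      # heavy tail: most exits small, a few huge
    return invested, invested * exit_multiple

results = [simulate_company() for _ in range(10_000)]
total_in = sum(i for i, _ in results)
total_out = sum(o for _, o in results)
print(f"cohort multiple on invested capital: {total_out / total_in:.2f}x")
```

Even this toy version reproduces the qualitative behavior discussed below: most companies return nothing, and the cohort's overall multiple is driven by a handful of outliers in the tail.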

Some key things learned and reinforced in the course of building the model:
  • There’s a huge amount of disagreement about just how many startups are started every year. The Kauffman Foundation says ~6,000,000 new businesses are created each year, without stating how many are high-growth startups, while Marc Andreessen puts the number of startups at around 4,000. On top of that, people still don’t agree on the definition of a startup.
  • It’s really hard to build a startup. So, so many fail. The vast majority of new businesses fail to attract any angel or VC funding at all.
  • Power-law distributions are still not internalized by people (and not well represented by this model). The magnitude and variance of the returns that one company can generate are just astounding. WhatsApp raised a total of $60 million and exited at a total valuation of $19 billion, a 316x return on invested capital. 50% of startups will fail to return anything, and the next 40% above that will hopefully return investors’ total invested capital. It is the WhatsApps of the world, the top 1%, that bring home meaningful returns.
  • Angel investors make up a huge, not-as-often-recognized pool of capital for startups: roughly $20 billion is invested by angels into startups per year. Their importance at the earliest stages, where they participate in 50,000 to 70,000 deals per year, is hard to overstate. This prominence has grown since the 2000s because AWS and related services have pushed down the cost of doing a startup; with software startups so cheap to start, VCs can’t deploy such small amounts of capital in a single deal, since their model doesn’t work that way. Angel investors do, in fact, generate a nice return, in line with VC returns.
As previously mentioned, Guesstimate offers only two distributions, normal and uniform, so it isn’t able to capture much of the statistical reality of startups. While a normal distribution may be a reasonable way to model the likelihood of a startup moving on to the next stage of funding, it’s not a very good way to model the return generated at exit. Right now, the model is merely descriptive (and barely so). In the future, I’d like to move towards a prescriptive model to answer the question: “How can we change the current system to create more impactful innovation in the world?” Questions such as “Do we need a more diverse group of VCs to allocate capital to different startups?” or “Is the most effective way to create innovation to pump more money towards VCs or to lower the cost of starting startups?” may be more easily answered with this model.
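Returning to the distribution limitation for a moment, here is a hedged, toy comparison in Python with made-up parameters: a thin-tailed normal exit multiple versus a heavy-tailed Pareto one, measured by how much of the total value comes from the top 1% of outcomes:

    import random

    def top_share(samples, top_frac=0.01):
        # Fraction of total value contributed by the top `top_frac` of outcomes.
        ranked = sorted(samples, reverse=True)
        k = max(1, int(len(ranked) * top_frac))
        return sum(ranked[:k]) / sum(ranked)

    random.seed(0)
    n = 10_000
    # Roughly what Guesstimate can express: a (truncated) normal exit multiple.
    normal_exits = [max(0.0, random.gauss(mu=3.0, sigma=1.0)) for _ in range(n)]
    # A heavy-tailed, power-law-like exit multiple; alpha here is an arbitrary choice.
    pareto_exits = [random.paretovariate(alpha=1.2) for _ in range(n)]

    print(f"top 1% share of value, normal: {top_share(normal_exits):.0%}")
    print(f"top 1% share of value, pareto: {top_share(pareto_exits):.0%}")

Under the normal assumption, the top 1% of exits account for only a few percent of total value; under the heavy-tailed one they account for a large share of it, which is much closer to the WhatsApp-style reality described above.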

With that said, here are a few directions I’d like to explore:
  • Exploring how broader macroeconomic trends influence the startup industry. At the midpoint of 2015, China was on pace to invest $30 billion through venture capital. How will China influence funding in 2016, and how will these trends impact the startup ecosystem 5 to 10 years down the line? (Thanks Daniel)
  • How the industry (and cost of doing a startup) affects the rate of formation. While we’ve seen a veritable boom in the formation of software startups, the same can’t be said for life science startups, where the number of initial financings by VCs has remained unchanged. As the cost of doing startups comes down, we should see a pattern of more hardware and biology startups being funded at the early stages. PCH International and Transcriptic are working to do their part to lower costs in their respective industries.
  • Making this model more of a simulation to see how the ecosystem evolves over time. I would like to see how exits by large companies seed the next generation of angel investors and provide landing grounds for acquisitions. Silicon Valley wasn’t built overnight. The dynamic process of companies exiting and investors passing on advice to the next generation is important to creating huge companies and innovative ecosystems.
  • Adding more data! I’d like to see how individual firms, investors, and entrepreneurs influence the growth of a startup, rather than relying on the aggregated statistics provided by reports.
Thanks for reading! Drop a note on Twitter if you found this interesting!

Here is a list of sources.

Thanks to Daniel Kao, Jonathan Zong, and Reed Rosenbluth for reading a draft.
Dillon Chen
tag:dillchen.posthaven.com,2013:Post/948026 2015-12-13T20:57:00Z 2019-02-27T05:47:33Z Observations on Company Culture

I recently visited around fifteen companies in SF, ranging from small startups just past Series A to 20-year-old internet companies. Without dropping any names, here are some observations.

Authentic belief in a company’s mission, the sense that one’s work is actually important, is different from the usual lip service companies pay when talking about “changing the world”. Culture isn’t just letting dogs into the office, or nice couches, or Hawaiian shirts. You can literally smell the culture: it’s in the air, written on people’s faces, in how they speak and act. It’s imbued from the top down through founding stories and values, and from the bottom up through the interactions between co-workers and visitors. Everyone’s attitude influences the overall culture, positively or negatively. We all know that communication is 85 percent body language; culture is communicated non-verbally as well.

It seems very, very hard to keep missionary cultures as companies grow. Finding engineers is hard enough, but finding engineers who believe in the mission is harder still, and triply hard when the company is also quadrupling in size. Every place we visited had smart people; that was clear. However, challenging them to do great work and getting them to believe is hard. The “craziness” of the mission (not a scientific measure) seemed directly correlated with the quality of the people.

A focus on metrics and product direction lent a sense of urgency to everyday activity. We visited a company where the hockey-stick growth chart was prominently featured in the center of the office. It’s a visual reminder of where the company has been, where it is, and how it is doing. Without a view of the metrics, a company could kid itself into believing it was doing well. There was a huge difference between the companies that talked a big game about growth and those that could actually show outsiders their growth.

With all that said, here are a few of my suggested ingredients for a great culture: founder myths (the trials and tribulations the founders went through to create change in the world, i.e. the hero’s journey), missionary people (people who believe they are doing something for others), heaps of trust, a focus on continual improvement, and luck.

Getting the culture right seems really, really hard, but it’s vital to getting real work done.

Dillon Chen