Links: 12/11/17

Beau Lyddon
Published in Real Kinetic Blog · Dec 11, 2017 · 11 min read

The future is here — AlphaZero learns chess

Imagine this: you tell a computer system how the pieces move — nothing more. Then you tell it to learn to play the game. And a day later — yes, just 24 hours — it has figured it out to the level that beats the strongest programs in the world convincingly! DeepMind, the company that recently created the strongest Go program in the world, turned its attention to chess, and came up with this spectacular result.

On December 5 the DeepMind group published a new paper on arXiv (the preprint server hosted by Cornell University) called “Mastering Chess and Shogi by Self-Play with a General Reinforcement Learning Algorithm”, and the results were nothing short of staggering. AlphaZero had done more than just master the game, it had attained new heights in ways considered inconceivable. The proof is in the pudding, of course, so before going into some of the fascinating nitty-gritty details, let’s cut to the chase. It played a match against the latest and greatest version of Stockfish and won by an incredible score of 64 : 36, without a single loss (28 wins and 72 draws)!

Stockfish needs no introduction to ChessBase readers, but it’s worth noting that it was examining nearly 900 times as many positions per second! Indeed, AlphaZero was calculating roughly 80 thousand positions per second, while Stockfish, running on a PC with 64 threads (likely a 32-core machine), was evaluating 70 million positions per second. To put that deficit in perspective: a version of Stockfish running 900 times slower would be searching roughly 8 moves less deep. How is this possible?
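
For a rough sense of where the “8 moves less deep” figure comes from, here is a back-of-the-envelope calculation (my own sketch, not from the article). It assumes an effective branching factor of about 2.3, a commonly cited ballpark for heavily pruned alpha-beta search rather than a measured Stockfish number.

```python
import math

# Positions per second quoted in the article.
nodes_stockfish = 70_000_000
nodes_alphazero = 80_000

# Assumed effective branching factor: each extra ply of search depth costs
# roughly this factor more positions. An illustrative assumption, not a
# figure from the paper.
branching = 2.3

ratio = nodes_stockfish / nodes_alphazero
plies = math.log(ratio, branching)

print(round(ratio))      # ~875, i.e. "nearly 900 times"
print(round(plies, 1))   # ~8.1, i.e. roughly 8 plies (moves) of depth
```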

  • Awesome that they’ve already applied this to a second game.
  • The ability to “learn” from just the set of rules, rather than from existing data, is pretty awesome and a step towards the flexibility necessary for true learning.
  • As exciting as this is, we’re still a ways off from the super or general AI that most people think of. The reason they use these games is that they come with a clear set of rules (boundaries), and of course the real world has few scenarios with such clear, concise boundaries. That said, there are scenarios that fit, and it will be intriguing to see which ones they go after next and what they’re able to do. Either way, these are the steps that need to be taken to get us to general AI.

The Case for Learned Index Structures

Indexes are models: a B-Tree-Index can be seen as a model to map a key to the position of a record within a sorted array, a Hash-Index as a model to map a key to a position of a record within an unsorted array, and a BitMap-Index as a model to indicate if a data record exists or not. In this exploratory research paper, we start from this premise and posit that all existing index structures can be replaced with other types of models, including deep-learning models, which we term learned indexes. The key idea is that a model can learn the sort order or structure of lookup keys and use this signal to effectively predict the position or existence of records. We theoretically analyze under which conditions learned indexes outperform traditional index structures and describe the main challenges in designing learned index structures. Our initial results show that by using neural nets we are able to outperform cache-optimized B-Trees by up to 70% in speed while saving an order of magnitude in memory over several real-world data sets. More importantly though, we believe that the idea of replacing core components of a data management system through learned models has far-reaching implications for future systems designs and that this work just provides a glimpse of what might be possible.
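
To make the “indexes are models” idea concrete, here is a toy sketch (my own illustration, not code from the paper, which explores richer models such as staged neural nets): fit a simple linear model that predicts a key’s position in a sorted array, record the worst-case prediction error, and confine each lookup to that small window.

```python
import bisect

def build_linear_index(keys):
    """Least-squares fit of position ~ a * key + b over a sorted key list."""
    n = len(keys)
    mean_k = sum(keys) / n
    mean_p = (n - 1) / 2
    cov = sum((k - mean_k) * (i - mean_p) for i, k in enumerate(keys))
    var = sum((k - mean_k) ** 2 for k in keys)
    a = cov / var if var else 0.0
    b = mean_p - a * mean_k
    # The worst prediction error bounds the search window at lookup time.
    max_err = max(abs(i - (a * k + b)) for i, k in enumerate(keys))
    return a, b, int(max_err) + 1

def lookup(keys, model, key):
    a, b, err = model
    guess = int(a * key + b)
    lo = max(0, guess - err)
    hi = min(len(keys), guess + err + 1)
    # Only the small window around the predicted position is searched.
    i = bisect.bisect_left(keys, key, lo, hi)
    return i if i < len(keys) and keys[i] == key else None

keys = sorted(range(0, 3000, 3))     # any sorted key set works
model = build_linear_index(keys)
print(lookup(keys, model, 2997))     # position of the key, here 999
```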

  • Jeff Dean and others at Google release another paper. So many of the previous papers released by Jeff and his gang have had significant impact on the tech industry (whether you realize it or not).

Finding bugs in Haskell code by proving it

Ten months ago I complained that there was no good way to verify Haskell code (and created the nifty hack ghc-proofs). But things have changed since then, as a group at UPenn (mostly Antal Spector-Zabusky, Stephanie Weirich and myself) has created hs-to-coq: a translator from Haskell to the theorem prover Coq.

We have used hs-to-coq on various examples, as described in our CPP'18 paper, but it is high time to use it for real. The easiest way to use hs-to-coq at the moment is to clone the repository, copy one of the example directories (e.g. examples/successors), place the Haskell file to be verified there, and put the right module name into the Makefile. I also commented out parts of the Haskell file that would drag in non-base dependencies.

  • Some day I hope to verify my code with proofs. I have a ways to go with these languages before I’ll feel even remotely comfortable though.
  • That said, I have done a good chunk of the Idris book (Type-Driven Development with Idris) and it’s pretty damn neat what you can do. Even simple things like defining limits on lists are quite helpful.

What the “Rule of 40” Means at the Early Stage

Over the past few years there has been a flurry of articles about the “Rule of 40”, which describes a tradeoff between growth and profitability for software companies:

Growth rate plus profit should be greater than or equal to 40%.

CEOs and leadership teams have to steer in many directions (go upmarket or downmarket, when to enter adjacent markets, etc.), but one of the fundamental questions is when to accelerate investments in growth, when to “stay the course” and when to tap the brakes. I’d argue the Rule of 40 trope is misleading and actually harms venture-backed companies. Instead, a simple, metrics-driven analysis can provide a clearer framework for the decision.

As an investor, I wholeheartedly embrace the idea that companies should manage with discipline and put themselves on a path to profitability. Being funded by customers rather than investors lets companies control their own destiny. Profit and growth matter, and the rule does explain 30%-50% of the valuation of public software companies according to several analyses. However, it doesn’t explain the other 50%-70%, growth and profitability aren’t of equal value, and the rule is incredibly hard to satisfy before companies reach significant scale (often around $200m in revenue).

  • There are no true golden rules.

Early stage companies are better served by doing detailed analyses of their unit economics: customer acquisition cost (by customer type and/or channel), gross margin adjusted Lifetime Value (LTV), churn rate, and net renewal rate.
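
As a rough illustration of what that unit-economics analysis can look like, here is a back-of-the-envelope sketch. Every number below is hypothetical, and the LTV formula is one common simplification (gross margin times annual revenue per account, divided by annual churn); the point is only to show the mechanics of the LTV:CAC comparison.

```python
def ltv(gross_margin, arpa_per_year, annual_churn):
    """Gross-margin-adjusted lifetime value under a simple churn model."""
    return gross_margin * arpa_per_year / annual_churn

# Hypothetical inputs, purely for illustration.
cac = 12_000            # cost to acquire one customer (by type or channel)
gross_margin = 0.75     # 75% gross margin
arpa = 10_000           # annual revenue per account
churn = 0.15            # 15% of customers lost per year

customer_ltv = ltv(gross_margin, arpa, churn)   # 0.75 * 10,000 / 0.15 = 50,000
print(customer_ltv / cac)                       # LTV:CAC ratio of about 4.2
```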

  • Yep. Pretty much always.

To be clear, eventual profitability does matter. Companies should have a strategic plan and a long-term financial model where the lines converge as they achieve scale. Especially in an early-stage startup, it may suffice to have a well-thought-out theory of the business that achieves profitability as gross margins improve, as customer acquisition channels get more developed, and as data accumulates and machine learning models get trained. But you can’t lose money on every customer and make it up in volume.

  • That last line should be repeated over and over.

SoftBank’s $450 million investment puts Compass on the global map

Compass, a New York-based real estate startup, said Thursday that it had raised $450 million in funding from SoftBank Group’s Vision Fund. The deal values Compass, which has now raised $775 million in total, at $2.2 billion.

Cofounded in 2012 by CEO Robert Reffkin and executive chairman Ori Allon, Compass has quickly established itself in top-tier markets like New York by wooing top agents. Its technology platform helps agents price and market properties by automating workflows and giving them access to data insights. After signing on with Compass, agents see a 25% increase in commission income in their first year, the company says.

Compass, which is currently active in 11 U.S. markets after launching in Chicago last week, plans to use the funds to double its domestic reach over the next year. Eventually, the company plans to enter markets overseas.

“We’ve been doubling every year in terms of revenue, in terms of transactions. Now we want to take it up a notch,” Allon tells Fast Company.

Today, half of Compass markets are profitable, with the most established leading the way. Allon expects revenues to surpass $800 million in 2018.

  • Getting more funding does not equal success but it does tell us some things. And one of the biggest takeaways for me (and there are other indicators) is that the real estate market is ready for disruption and it’s coming soon.

Why we dismiss negative employee feedback

Simply put: We hate criticism.

Anything negative, anything critical — we fear it. We resist, push back, and build a wall around ourselves.

In fact, as humans, our brains are hardwired to resist negative feedback. Research shows that our brains hold onto negative memories longer than positive ones, so the negative stuff always hurts more. We’re more upset about losing $50 than gaining $50… It’s the same when it comes to feedback. When we hear something negative, it sticks with us more than when someone tells us something positive about ourselves.

Our distaste for negative feedback is so strong that further research shows we drop people in our network who tell us things we don’t want to hear. In a recent study with 300 full-time employees, researchers found that people moved away from colleagues who provided negative feedback. Instead, they chose to seek out interactions with people who only affirmed their positive qualities.

  • It’s good to do some self-analysis to understand our default reactions to negative feedback and what might be the cause of those feelings. We can then break those down so we can better receive that feedback.
  • This applies not just to negative feedback but any time our beliefs and biases are challenged. The more we can stay unemotional and analyze what’s causing us to feel challenged, the more likely we are to fairly represent both our argument and theirs, and hopefully arrive at a good discussion.

Fundamental challenges with public blockchains

There’s no question that blockchain technology has enormous potential.

Decentralized exchanges, prediction markets, and asset management platforms are just a few of the exciting applications being explored by blockchain developers.

Exciting enough, in fact, to raise billions of dollars in ICOs and drive massive price rallies throughout 2017. The hype is real.

Don’t get me wrong. I love the fact that blockchain “hype” is helping popularize it with mainstream users. Finally, I don’t get blank stares from people when I say “Bitcoin” or “Ethereum”.

However, there’s a flipside to this story that isn’t getting enough attention: blockchains have several major technical barriers that make them impractical for mainstream use today.

I believe that we will get there, but we need to be realistic as developers and investors. And the reality is that it could be many years before trustless systems are ready for mainstream use at scale.

Some of these technical barriers include:

  • Limited scalability
  • Limited privacy
  • Lack of formal contract verification
  • Storage constraints
  • Unsustainable consensus mechanisms
  • Lack of governance and standards
  • Inadequate tooling
  • Quantum computing threat

… and more.

In this post, I’ll walk through these technical barriers and share examples of solutions for overcoming them.

I believe it’s critical that we, as developers, shift some of our focus away from shiny new ICOs and toward the real technological challenges standing in the way.

  • Cryptocurrency hype aside, I believe that blockchain in some form will be with us moving forward, even if, at worst, we end up taking lessons learned from blockchain and applying them elsewhere.
  • This is a fantastic and extremely detailed walkthrough of some of the issues facing blockchains. Preethi also includes some of the early potential solutions to these issues.
  • If you are even remotely interested, or would just like to learn a bit more about the details of blockchain, this post is well worth your time.

PRESENTATION: Ideology

Some people claim that unit tests make type systems unnecessary: “types are just simple unit tests written for you, and simple unit tests aren’t the important ones”. Other people claim that type systems make unit tests unnecessary: “dynamic languages only need unit tests because they don’t have type systems.” What’s going on here? These can’t both be right. We’ll use this example and a couple others to explore the unknown beliefs that structure our understanding of the world.
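
As a toy illustration of the tension the abstract describes (my own example, not one from the talk), consider how a type annotation plus a checker like mypy covers the kind of “simple unit test” that a dynamic-language codebase might otherwise write by hand, while the behavioral test still has to be written either way.

```python
from typing import List

def total_cents(prices: List[int]) -> int:
    """Sum a list of prices expressed in integer cents."""
    return sum(prices)

# The "simple unit test written for you": a static checker such as mypy
# flags this call as passing an incompatible argument type, before the
# code ever runs.
# total_cents("199")

# The behavioral check still has to be written by hand; no type signature
# can tell you that the arithmetic itself is what you intended.
assert total_cents([199, 250]) == 449
```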

  • Finally got around to watching this fantastic little talk from @garybernhardt from Strangeloop 2015.
  • This resonated so much with me. I started on a bad dynamic language (PHP), then moved to languages with weaker static type systems (C#, VB), then back to dynamic languages that were slightly better (Python, Ruby, JS), and now I’m on to the statically typed languages he mentions (PureScript, Elm, Haskell and even Idris :)).
  • At this point, for me it’s about having a language that provides the best-sized category for the problem. I’m finding that whatever time I lose up front I gain back later in fewer issues, and in quicker debugging when issues do arise.
  • Also this is a non-sponsored plug for subscribing to the content Gary produces at Destroy All Software. His original videos on Ruby helped shape me about as much as any programming material I’ve ever come across. And his new content is fantastic.

The Four Big Risks

In the first edition of my book, INSPIRED, I discussed how successful products are valuable, usable and feasible, where I defined “feasible” as both technically feasible and business feasible.

While it’s easy to remember these three attributes, over the years I’ve come to believe that it was obscuring some pretty serious risks and challenges, and making it too easy for product managers to overlook some critically important work.

The technology feasibility risks can be substantial (especially today, when so many teams are exploring machine learning technology). As for the business risks, while these have always been substantial, I find that they are too often under-appreciated and under-estimated (or simply avoided) by the product manager.

  • First, I must admit my bias here, as we at RK are huge Marty Cagan fans. We started working with him at Workiva, have built up a friendship with him, and always value his insight. He also sat down with us as we were starting RK and gave us a weekend’s worth of invaluable advice before we made the leap. We could not be more grateful.
  • All of that said, those who know us know that we’re not BS’ers. We believe what we believe regardless of emotions. And we have always been big fans of Marty’s work and we align very much with his beliefs and writings.
  • So with all of that said, I highly recommend you give his updated book “Inspired” a read if you have not. You can get more info here: INSPIRED.
  • I also recommend subscribing to the SVPG blog to get his posts as well as his partners’. This is a shorter version of what they normally post, but per usual we agree with it. Identifying and managing risk is critical to your success. If you want an even deeper dive just on managing risk on software projects, then I highly, highly recommend “Waltzing With Bears: Managing Risk on Software Projects”. It is one of my favorite books on software, and required reading for those who want to work in the software industry. A shout out to my former co-worker Ryan Heimbuch for recommending it to me years ago.
