I have learned only a minuscule amount about machine learning; just enough to get under the hood and build something very rudimentary. As I was learning some of the underlying concepts and methods, a couple of things really struck a chord with me because of how applicable they are to, well, everything.

When you turn a problem into code, you are forced to be explicit about things that you might otherwise have left implicit. Specifically, you have to be explicit about what you wish to optimize. Working with startups is inherently a question of what you are trying to optimize. As an investor, this includes things like portfolio construction and allocations. As a founder, it includes managing your product, how you measure success, and all the endless tradeoffs you have to navigate when dealing with customers.
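To make that concrete, here is a toy sketch of how an objective gets encoded (my own illustrative example, with made-up data, not taken from any particular ML framework): fitting a single number to the same data under two different loss functions produces two different “best” answers.

```python
# Toy example: the "best" answer depends entirely on what you optimize.
# Fit a single constant c to some made-up data under two loss functions.

data = [1, 2, 2, 3, 100]  # note the outlier

def squared_error(c):
    return sum((x - c) ** 2 for x in data)

def absolute_error(c):
    return sum(abs(x - c) for x in data)

def minimize(loss, lo=0.0, hi=100.0, steps=10_000):
    # Brute-force grid search; fine for a one-dimensional toy problem.
    candidates = (lo + i * (hi - lo) / steps for i in range(steps + 1))
    return min(candidates, key=loss)

print(minimize(squared_error))   # 21.6, the mean -- dragged up by the outlier
print(minimize(absolute_error))  # 2.0, the median -- shrugs off the outlier
```

Same data, same one-parameter “model”; the only thing that changed is the loss function, and the answer moved from 21.6 to 2.0. Choosing the objective is choosing the answer.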

The Training Objective

Beating a professional human at Go was considered a milestone for artificial intelligence because Go has vastly more possible moves to evaluate at each turn than a game like chess. But in 2016, AlphaGo (developed by Google’s DeepMind) won four out of five games against Lee Sedol, an 18-time world champion. What was so interesting is that AlphaGo made moves that looked like mistakes to most human players. In a 2017 match against the world’s top-ranked player, one move in particular was criticized as a mistake, because the computer gave up what seemed like an obvious advantage, one that could have led to a large margin of victory. Andrej Karpathy, a prominent AI researcher, later commented on that particular move in a series of tweets. His point, in essence, was that AlphaGo is trained to maximize its probability of winning, not its margin of victory, so a move that appears to throw away points can still be optimal if it makes the win more certain.

This illustrates a critical point. Not everyone around the table is optimizing for the same outcome at all moments. I have been in tense boardroom discussions where this fact is thrown into sharp relief. The lesson I have tried to internalize is to ask myself, “what am I optimizing for?” And once I have a suitably coherent answer to that, to move on to, “what are the people around me optimizing for?”

To some extent, this is common sense that any book on human psychology would recommend (e.g. you have to understand what your customers need). But those Andrej Karpathy tweets continue to surface in my thoughts, especially around people whose behaviors I find confusing, or when I have a particularly strong gut-level reaction to something going on with a portfolio company.

One thing to note: these differing objectives may not necessarily be adversarial, just not completely in alignment. These partial misalignments can be the most subtle ones to debug. For example, imagine two cofounders measuring success using different metrics. Both founders may be trying to build the same company, but if one believes that user growth is the most important indicator and the other believes that ACV (annual contract value) is, they will have different opinions on which strategies to employ.
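To sketch that in code (the strategies and scores below are entirely made up for illustration), you can score the same candidate strategies under each cofounder’s metric and watch the “best” choice change:

```python
# Toy example: two objective functions ranking the same options differently.
# The strategies and scores are invented purely for illustration.

strategies = {
    "launch a freemium tier": {"user_growth": 0.9, "acv": 0.2},
    "focus on enterprise deals": {"user_growth": 0.2, "acv": 0.9},
    "usage-based pricing": {"user_growth": 0.6, "acv": 0.5},
}

def best_strategy(objective):
    """Return the strategy that maximizes the given objective."""
    return max(strategies, key=lambda name: objective(strategies[name]))

print(best_strategy(lambda m: m["user_growth"]))  # launch a freemium tier
print(best_strategy(lambda m: m["acv"]))          # focus on enterprise deals
```

Neither cofounder is wrong about the arithmetic; they are simply running different argmaxes over the same options, which is exactly why the disagreement can feel so hard to pin down from the inside.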

Often, and embarrassingly, I realize that I am optimizing for my ego. As I have noticed this in myself, I have started to notice it in others as well. It goes without saying that this is not a good look. So when I feel like a situation has shifted out of alignment and I’m struggling to put my finger on why, I come back to the same two questions: what am I optimizing for? And what are the other people here optimizing for?


A version of this post was originally published in Hacker Noon