About Me

The one and only Jorge Martinez #73

I was born in Miami, where I've lived almost my entire life – with the exception of some brief stints in Philadelphia and New York. I love Miami so much I decided to buy a domain name with the .miami TLD.

My professional-sounding interests are computer science, finance, and statistics, but if you go by hours of lifespan spent, video games would undoubtedly be my #1 interest. Thankfully I became a quantitative trader instead of a Twitch streamer.

Within the expansive field of computer science, I particularly enjoy playing around with search-based artificial intelligence. By using human intelligence to craft intelligent search heuristics, we can challenge, if not beat, the results that DeepMind, OpenAI, and others achieve through millions of hours of compute time across thousands of TPUs. What's more, search-based AI gives you an interpretable model that can be understood, debugged, and explained to non-technical users.
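To make "human intelligence creating the heuristic" concrete, here's a minimal sketch of A* pathfinding on a toy grid, where a hand-written Manhattan-distance heuristic is the bit of human insight guiding the search. The grid, start, and goal below are invented purely for illustration, not taken from any particular project.

```python
import heapq

def manhattan(a, b):
    # Hand-crafted heuristic: an admissible lower bound on the remaining steps.
    return abs(a[0] - b[0]) + abs(a[1] - b[1])

def a_star(grid, start, goal):
    # grid: 2D list where 0 = open cell, 1 = wall.
    frontier = [(manhattan(start, goal), 0, start, [start])]
    seen = set()
    while frontier:
        _, cost, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        if node in seen:
            continue
        seen.add(node)
        r, c = node
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < len(grid) and 0 <= nc < len(grid[0]) and grid[nr][nc] == 0:
                heapq.heappush(frontier, (cost + 1 + manhattan((nr, nc), goal),
                                          cost + 1, (nr, nc), path + [(nr, nc)]))
    return None  # no route exists

if __name__ == "__main__":
    grid = [[0, 0, 0],
            [1, 1, 0],
            [0, 0, 0]]
    print(a_star(grid, (0, 0), (2, 0)))  # prints the cheapest path around the wall
```

The nice part is exactly the interpretability point above: if the search does something dumb, you can inspect the frontier and the heuristic values and see why.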

Within the expansive field of finance, I particularly enjoy making money. But joking aside, my niche interest is market microstructure. It's fascinating because market microstructure touches on psychology, adverse selection, pure as-Smith-intended economics, and more. Lately I've been thinking about this "paper" by Bruce Knuteson: They Still Haven't Told You. While I think the writing is muddled and the conclusions are by no means as certain as the author claims, it's an interesting puzzle to ponder: why are overnight returns greater than intraday returns by such a large margin? Are trading desks and investment managers reducing their books at the close to avoid carrying overnight risk? Is it because of earnings reports? Is there some kind of trading ring taking advantage of lower liquidity overnight to manipulate prices upward?
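If the overnight/intraday split sounds abstract, here's a toy calculation of what it means. The prices are invented; the point is just the bookkeeping: the overnight leg runs from yesterday's close to today's open, and the intraday leg from today's open to today's close.

```python
closes = [100.0, 101.5, 100.8, 102.3]   # hypothetical daily closing prices
opens  = [ 99.5, 100.9, 101.6, 101.0]   # hypothetical daily opening prices

overnight, intraday = 1.0, 1.0
for t in range(1, len(closes)):
    overnight *= opens[t] / closes[t - 1]   # close[t-1] -> open[t]
    intraday  *= closes[t] / opens[t]       # open[t]    -> close[t]

print(f"cumulative overnight return: {overnight - 1:+.2%}")
print(f"cumulative intraday  return: {intraday - 1:+.2%}")
```

Knuteson's claim, roughly, is that if you run this decomposition over decades of index data, the overnight series accounts for a suspiciously large share of the total return.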

Lastly, within the expansive field of statistics, I particularly enjoy studying statistical approaches to computational linguistics. There are a lot of cool problems in this space, including document classification, text generation, part-of-speech tagging, etc. I think it's ripe for an explosion of Bayesian methods, and Wikipedia assures me that people are using Dirichlet-multinomial models for computational linguistics, so not long now.
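For a flavor of what a Dirichlet-multinomial approach can look like in practice, here's a hedged sketch of document classification: multinomial naive Bayes where a symmetric Dirichlet(alpha) prior over each class's word distribution turns into simple additive smoothing in the posterior predictive. The tiny corpus and the alpha value are made up for illustration.

```python
import math
from collections import Counter

docs = [("buy stocks now", "finance"),
        ("stocks fall on earnings", "finance"),
        ("new graphics card released", "tech"),
        ("graphics drivers updated", "tech")]

alpha = 1.0  # symmetric Dirichlet concentration; alpha = 1 is Laplace smoothing
vocab = {w for text, _ in docs for w in text.split()}
counts = {}              # per-class word counts
class_docs = Counter()   # documents per class
for text, label in docs:
    class_docs[label] += 1
    counts.setdefault(label, Counter()).update(text.split())

def log_posterior(text, label):
    # log P(label) plus the smoothed (posterior-predictive) word probabilities
    lp = math.log(class_docs[label] / len(docs))
    total = sum(counts[label].values())
    for w in text.split():
        lp += math.log((counts[label][w] + alpha) / (total + alpha * len(vocab)))
    return lp

query = "stocks and earnings"
print(max(class_docs, key=lambda c: log_posterior(query, c)))  # -> "finance"
```

Nothing fancy, but it shows why the Bayesian framing is appealing: the prior handles unseen words gracefully, and alpha is an explicit knob rather than a hack bolted on afterward.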