I had the same result from most papers. My personal conclusion is that when people get a winning strategy they don’t publish. I personally put my money where my mouth was for a few years:

https://austingwalters.com/backtesting-our-100-yoy-profit-ge...
That being said, I try to be honest too. This edge could disappear at any time, and the model I use may only be good in this environment; I don’t know. I think that’s the challenge with papers: you honestly don’t know when, or whether, a strategy works. It clearly won’t work forever regardless.
That’s why I don’t share my exact method. And after doing all the research myself AND trying to sell my algorithm, I honestly don’t think the industry knows what it’s doing either. People are worried about Sharpe ratios and all this BS stuff. The reality is that for these models you mitigate risk via temporary and ever-changing methods. You can’t really publish on that.
> I had the same result from most papers. My personal conclusion is that when people get a winning strategy they don’t publish.
This is kind of obvious to me. It is also the reason the OP posted their results: I'm sure that if, after putting that much time into their research, they had found one strategy that worked, it would be really stupid to announce to the whole world that it works.
On the other hand, in the trading world where everyone is a competitor, you might want to deliberately introduce some confusion - but it looks like plenty of actors are doing this anyway.
I hope that your personal implementation of your strategy takes 2008 into account ;). Because, hooo, boy, your 20% YoY returns sound great, but they may not be taking the risk of a similar collapse sufficiently into account.
It does take those risks into account, and I tested it on 2006-2008 previously. However, there is little point in testing a collapse with the same model: if the stock market collapses, anyone investing would be out of luck. Your standard models won’t be able to track that, because it’s usually something like a “black swan” event[1].
Instead you’d want some sort of meta-model. In either case, I’m getting 100% YoY returns when I augment my model, in real life. I think even a 50% loss one year wouldn’t be the end of the world.
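A quick back-of-the-envelope check on that claim, using the figures quoted in this thread (the five-year horizon and the -50% crash year are just illustrative assumptions on my part):

```python
# Back-of-the-envelope compounding check using the figures quoted above.
normal_year = 2.0   # +100% YoY, as claimed
crash_year = 0.5    # a hypothetical -50% drawdown year

no_crash = normal_year ** 5                  # 32x over five good years
with_crash = normal_year ** 4 * crash_year   # 8x with one crash year mixed in

print(no_crash, with_crash)  # 32.0 8.0
```

A single -50% year costs two doublings (a 4x haircut), but at that growth rate you are still compounding far ahead of the market.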
Just a guess, but: if you come up with a successful algorithm, you still need to have money that can be invested in order to use it. So maybe someone else with 100x more money to invest would pay more for the algorithm than you could earn from it in a lifetime.
Is your "winning" strategy so different from other social data / sentiment analysis approaches? There is some novelty in how you weight the sentiments (based on how much of an insider or expert someone is), but I am sure existing trading strategies weren't just taking a dumb average of a Twitter firehose either. Shouldn't it be easy for some large firm to replicate your approach and make the alpha disappear?
> Shouldn't it be easy for some large firm to replicate your approach and make the alpha disappear?
First, I don't think alpha ever fully disappears.
Second, after speaking with twenty or so firms, very few are using sentiment directly. Those that do, I suspect, don't take the additional steps to build complex NLP-based systems and weight insiders/experts. Even if you weight experts, the methods for doing so are also complicated (a cross-check against LinkedIn should be easy enough, but it also limits information).
Anyway, I personally haven't seen much difference in the backtesting.
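For concreteness, here is a minimal sketch of what "weighting insiders/experts" could look like in practice. Everything here is hypothetical: the data shape, the scoring ranges, and the sample weights are illustrative assumptions, not the poster's actual method.

```python
from dataclasses import dataclass

@dataclass
class Post:
    sentiment: float   # e.g. -1.0 (bearish) .. +1.0 (bullish), from an NLP model
    expertise: float   # e.g. 0.0 .. 1.0, from cross-checking bios/LinkedIn

def weighted_sentiment(posts: list[Post]) -> float:
    """Expertise-weighted average instead of a dumb mean over the firehose."""
    total_weight = sum(p.expertise for p in posts)
    if total_weight == 0:
        return 0.0
    return sum(p.sentiment * p.expertise for p in posts) / total_weight

# One credible insider's bearish take outweighs two anonymous bullish posts.
posts = [Post(+0.6, 0.1), Post(+0.8, 0.1), Post(-0.9, 0.9)]
print(weighted_sentiment(posts))  # ~ -0.61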
It seems that you consider the Sharpe ratio not worthwhile. Would you be able to elaborate on that point? As someone who is getting into algotrading, I’m currently using the Sharpe ratio to quantify risk, but I would like to hear another take on it.
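For reference (not the parent's position), the Sharpe ratio in question is just mean excess return divided by return volatility, usually annualized. A minimal sketch from daily returns; the sample numbers here are made up:

```python
import math

def sharpe_ratio(daily_returns, risk_free_daily=0.0, periods_per_year=252):
    """Annualized Sharpe ratio: mean excess return / stdev of excess returns."""
    excess = [r - risk_free_daily for r in daily_returns]
    mean = sum(excess) / len(excess)
    var = sum((r - mean) ** 2 for r in excess) / (len(excess) - 1)
    return mean / math.sqrt(var) * math.sqrt(periods_per_year)

# Made-up daily returns purely for illustration.
print(sharpe_ratio([0.001, -0.002, 0.0015, 0.003, -0.001]))  # ~4.0
```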
Winning strategies are generally not published. You won't see many business people blogging in detail about how they went from 0 to 1, which is why most blogs about business are a load of BS. Reading your statement and this article confirms my long-held suspicion that relying on internet strategies is a bad strategy.
There are definitely winning strategies. The issue is whether they can be reliably identified beforehand (thus rewarding skill), or whether people who implement them are just lucky (regardless of whether they believe they were skilled or not).
Well, if you get 30 people to flip a coin six times, there is a good chance one of them will get all tails or all heads. Now ask him his strategy for such amazing coin-flipping skills! (Example taken from the book "Statistics Done Wrong"[1].)
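The arithmetic behind that: each person has a 2 × (1/2)^6 = 1/32 chance of a uniform run of six, so among 30 people the odds that at least one gets a "perfect" streak are better than even:

```python
# Probability at least one of 30 people flips six-in-a-row (all heads or all tails).
p_streak = 2 * (0.5 ** 6)                   # 1/32 per person
p_at_least_one = 1 - (1 - p_streak) ** 30
print(p_at_least_one)                       # ~0.61
```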
I would be interested in seeing the total distribution of hedge fund returns, not just the outliers.
I understand what you are saying, but the equivalent to the Medallion fund would be something more akin to winning that same coin flip 30 times in a row. They have been running approximately 30 years, beating market returns by a significant margin, year after year. The 40% average is net of fees, so the overall return of their strategy has actually been higher than that. The odds that they have accomplished these returns through luck alone are astronomically low.
The trick is this: the chances that a specific fund will do well that long via luck are very low, but the chances that there exists a fund among all that exist that has done well via luck are quite high.
I can tell you haven't actually done the calculation. There have only been about 20,000 hedge funds in total over history. The odds of random chance producing Medallion's track record with that many draws are actually very low.
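A crude version of that calculation, treating "beating the market in a year" as a 50/50 coin flip and ignoring correlation, survivorship, and varying track lengths (all simplifying assumptions on my part):

```python
# Crude multiple-comparisons check: odds that at least one of ~20,000 funds
# beats the market 30 years running by pure 50/50 luck.
p_one_fund = 0.5 ** 30                      # ~9.3e-10
p_any_fund = 1 - (1 - p_one_fund) ** 20000
print(p_any_fund)                           # ~1.9e-05
```

And since Medallion's ~40% net returns go far beyond merely beating the market each year, the true luck-only odds would be lower still.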
This is true, though I don't know how many hedge funds operate. I guess some people really do have a working strategy!
The example I gave was more of a word of caution about past performance as an indicator of expertise, but I guess I made it sound more generalizable than needed!
That would be reversion to the mean[1]. This is a term I really dislike because it makes it sound as if there is some sort of equalizing force making over-performers under-perform later. It is more like the following: if you overestimate the expectation of a random process, you are going to be disappointed.
In our case, this does not make the "hot streak" any less probable once you start looking at a specific hedge fund. It is true it would be interesting to select a group of over-performers and study their future returns to see whether past performance is a good predictor of future performance. I feel you would probably get mixed results!
Although, as I said in the sibling comment, you are probably right, and no amount of statistics could explain the performance seen in this particular hedge fund.
It's probably still the Medallion Fund. Except it doesn't take outside money.
One characteristic of a good money manager is that he knows when to start rejecting money. Large AUMs tend to converge by necessity towards index funds (or something underperforming an index fund).