I think you are going to confuse people by framing this as "time series" and then focusing on reinforcement learning. A lot, if not most, of your framing is around RL.
From a first glance and a read-through of your roadmap, this doesn't feel suitable for people who know what they're doing with RL. It also doesn't feel suitable for people who don't.
Eh, I feel like it may be worth cutting some slack here.
It sounds like you’d like the title to be prefaced with “one time that”.
I don’t think anyone reading this says, “hey, GANs are done for.” I doubt the authors think that, and I (clearly) don’t get the impression that’s what they mean.
Even the first sentence of the abstract:
> We show that diffusion models can achieve image sample quality superior to the current state-of-the-art generative models
has the word “current” in front of “state-of-the-art generative models.”
Rather famously, you need a compute-intense operation like matrix multiplication to get CPU bound, i.e., an operation with enough arithmetic per data element that I/O doesn't dominate.
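A tiny back-of-the-envelope sketch (my own illustration, not from the comment): a naive n×n matmul does about 2n³ floating-point ops while touching about 3n² data elements, so arithmetic per element grows linearly with n, which is why it can become compute bound rather than I/O bound.

```python
def matmul_intensity(n: int) -> float:
    """Rough ops-per-element ratio for a naive n x n matrix multiply."""
    flops = 2 * n ** 3      # one multiply + one add per inner-loop step
    elements = 3 * n ** 2   # read A, read B, write C
    return flops / elements

print(matmul_intensity(1024))  # roughly 683 ops per element
print(matmul_intensity(8))     # far fewer ops per element at small n
```

Contrast this with something like a vector sum, which does about one op per element no matter how big the input gets.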
There are a lot of different definitions of intrinsic value, but it often has nothing to do with how much someone will give you for the asset.
There is the asset, an estimation of what the asset is truly worth (intrinsic value), and then finally the market price. In the case of the painting, the intrinsic value might be a function of a study of trends in demand for paintings, the artist, etc. The market price may or may not correctly factor in that information to price the painting well.
The fact that insider information exists should paint a clear picture of why someone's calculation of intrinsic value might differ from the market price.
Lastly, if someone gave me Hertz stock, I'd sell it ASAP and invest the money in something else.
Actually, I think this is exactly what misappropriation theory is supposed to account for. Can you explain more about why it is well within their rights? I don't think it is.
SVMs are, by default, linear models. The decision boundary in the SVM problem is linear, and since it’s the max-margin boundary, we may enjoy nice generalization properties (as you probably know).
You probably also know that decision tree boundaries are non-linear and piecewise. It’s not so straightforward to find splits on continuous features.
I.e., if the data is linearly separable, then why not? Even using hinge loss with NNs is not uncommon.
You probably see GBMs winning a lot of competitions compared to SVMs because many competitions have a lot of data and non-linear decision boundaries. Some problems don’t have these characteristics.
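For concreteness, the hinge loss mentioned above is simple enough to write out directly. This is my own minimal sketch, not code from the thread; labels are assumed to be in {-1, +1} and `scores` are raw model outputs:

```python
def hinge_loss(y_true, scores):
    """Mean hinge loss: penalizes points inside the margin or misclassified.

    y_true: labels in {-1, +1}; scores: raw (signed) model outputs.
    """
    return sum(max(0.0, 1.0 - y * s) for y, s in zip(y_true, scores)) / len(y_true)

# A confidently correct point (y=1, s=2.0) contributes 0;
# a weakly correct point inside the margin still contributes.
print(hinge_loss([1, -1, 1], [2.0, -0.5, 0.3]))
```

The same loss drives both the linear SVM objective and the occasional NN setup mentioned above; swapping it in for cross-entropy is a one-line change in most frameworks.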
https://ieeexplore.ieee.org/abstract/document/1451721