Hey, tangentially: I'm the CEO of Fabric, a company building orders-of-magnitude-faster hardware accelerators for next-gen cryptography on the latest fab technologies.
Founder of AlphaSheets here -- we built this back in 2015 and developed it for 3 years. We built Python, R, SQL, and full Excel formula/hotkey/format/conditional-formatting/ribbon compatibility. It was a long slog!
I wish you good luck and all the best. It's a tough field but a big market. And I still think the potential is there.
I always wondered what special genius people saw in Girard.
The observation that people want what each other want is not new and doesn't require philosophical genius to observe -- "keeping up with the Joneses" is what it's called by normal people.
What about "avoiding competition is good, so chase blue oceans"? That's a decent fund thesis, sure. Is it a genius one? The returns certainly seem to come from the application of the maxim and not the maxim itself.
The real insight would've been to propose a fun way out of this "mimetic hell" for society. Girard's observation is that this usually takes violence against a scapegoat -- certainly not a fun way out.
The background: the modern world imagines it can p-hack itself straight into Utopia. Girard may be articulating old wisdom; the trouble is that the modern world has lost its connection to old wisdom altogether.
> The real insight would've been to propose a fun way out of this "mimetic hell" for society. Girard's observation is that this usually takes violence against a scapegoat -- certainly not a fun way out.
Not sure people are familiar with the New Testament story arc?
> The observation that people want what each other want is not new and doesn't require philosophical genius to observe -- "keeping up with the Joneses" is what it's called by normal people.
"keeping up with the Joneses" refers to wanting what your neighbor has. If your neighbor gets a new car, then you also want to get a new car. It's about envy.
Girard is talking about wanting what your neighbor wants, which is profoundly different. E.g. your neighbor's daughter declares she wants to be a Princess, and then suddenly your daughter decides she wants to be a Princess. But your neighbor's daughter is not a princess. It is a contagion of desire - a preference cascade. And Girard claimed that all desire was mimetic in origin.
This challenges the notion that we are autonomous beings who follow our own inner course. But if all desire is mimetic, then who is patient zero and what is the first desire? This is where the theory of original sin comes in, the first desire represented by the serpent in the garden.
And it suggests that if someone can break out of that and want something genuinely novel for himself, then others will start wanting what he wants! This would be the miracle of the second desire.
Then we can ask: how do you break out of that chain of contagious desire? And that's when we get to Girard's Christianity. That is, we can't just decide "I don't want this. I want to be different." There must be a struggle in which we genuinely try to realize our desires. This struggle produces conflict and death, and the community places the death on the scapegoat, at which point the community is no longer struggling against each other, but against the scapegoat. In this way the scapegoat unifies the community, as they all blame him, and they all desire to kill him. This mechanism for extinguishing conflict is religion -- it is "the sacred" in pre-Christian religions.
But now comes the kicker: Christianity comes along and declares that the scapegoat is actually innocent. The scapegoat was one man who was blameless. Oops! As this teaching seeps slowly into society, the more we empathize with the victim, the less effective the scapegoat becomes at unifying society -- he even becomes a point of division!
This makes us more susceptible to mimetic contagion, not less, and on a much bigger scale than before Christianity. Thus empathy for scapegoats creates even more mimetic rivalry over who is the biggest victim -- who is most like the scapegoat? -- causing endless war and conflict in proportion to how empathetic society becomes. In such a society, the greatest violence originates in the desire to help the underdog, the outcast, and the unfairly put upon.
Thus Christianity does not free us from mimetic hell, but it "desacralizes" the post-Christian society. Girard was one of the first gloomy thinkers to predict that the seeming outbreaks of compassion in the West would lead to tremendous violence with no apparent resolution.
But there is an escape hatch, which is that even though we no longer want to kill the scapegoat, we may now be infected by him -- that is to die ourselves, wanting what he wants. The scapegoat can be the genuine second desire.
There is a lot to explore here. Suffice it to say, this is not just "keeping up with the Joneses"!
Yes, it's a terrible bastardization to try to turn Girard's theories into some list of 10 weird tricks of startup success.
IMO, that is not what motivates Peter Thiel. It would be like looking at Steve Jobs, noticing that Bob Dylan's music was important to him, and then writing blogs about Bob Dylan's secrets for product-market fit.
That's just gross - even if you don't like Bob Dylan. So even if you don't like Girard, the idea of this website is pretty gross.
For Peter Thiel, this is (I suspect) a religious journey, but one that makes him appropriately cautious of the dangers of pseudo-Christian empathy -- in which case it would be an important source for his conservatism, but not an important source for his business success.
Caveat: the paper mainly focuses on the "standard quantum limit", which is the fundamental photon energy needed for the operations. If other things are taken into account (for example, modulation energy for the weights in this homodyne scheme, which scales with N² rather than N, or the limits of the ADC), then the energies they are proposing are nowhere near achievable. Furthermore, substantial alignment and packaging problems exist for free-space optical systems, which prevent them from beating integrated approaches in the near term. In fact, it seems that Fathom Computing has potentially pivoted away from free space, based on the latest verbiage on their website, and they've been trying to get it to work for 3 years now.
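To make the N² vs. N point concrete, here's a rough back-of-the-envelope sketch; the specific energy values are made-up assumptions for illustration, not figures from the paper or from any vendor:

    # Rough scaling sketch for an N x N optical matrix-vector product.
    # All energy values below are illustrative assumptions.
    N = 1000
    macs = N * N                             # one MAC per weight per pass

    photon_energy_per_input = 1e-15          # J, assumed optical signal energy per input element (scales with N)
    modulation_energy_per_weight = 1e-15     # J, assumed modulator energy per weight (scales with N^2)

    photon_per_mac = (N * photon_energy_per_input) / macs                # shrinks as 1/N
    modulation_per_mac = (macs * modulation_energy_per_weight) / macs    # constant in N

    print(photon_per_mac)       # ~1e-18 J: the part the "standard quantum limit" counts
    print(modulation_per_mac)   # ~1e-15 J: does not amortize away as N grows

The signal energy per MAC keeps falling as the network grows, but the weight-modulation term doesn't, which is why the subattojoule headline number is only part of the story.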
However, it still makes an interesting case that the fundamental floor on optical scaling is absolutely tiny. It'll be interesting to see who wins in this space :)
Does the paper propose building these devices in free space? I got the sense this was all intended to be produced lithographically with waveguides, so alignment wouldn't be a problem.
The title is wrong about it being an integrated design. Here's an excerpt from the paper abstract:
"This paper presents a new type of photonic accelerator based on coherent detection that is scalable to large (N≳106) networks and can be operated at high (gigahertz) speeds and very low (subattojoule) energies per multiply and accumulate (MAC), using the massive spatial multiplexing enabled by standard free-space optical components"
I don't mean to be too negative here, but this is hardly a new development, so can someone clarify the novelty in this paper? Neural nets have been extensively demonstrated in memristor-based architectures [1] and several memristor-based training architectures have previously been proposed and tested [2]. The abstract's claim that "no model for such a network has been proposed so far" is prima facie blatantly false.
In any case, I have yet to see a conclusive, publicly explained solution to the significant system-level problems with memristor-based neural architectures, or indeed any analog neural architecture. The best claimed digital architectures are around ~250 fJ per multiply-and-accumulate (MAC) [Groq], and these generally involve 8-bit multiplication, which is extremely expensive in the analog domain thanks to the exponential scaling of power with precision.

Even if you set aside the monstrous fabrication and device-level variance issues with memristors, DACs and ADCs consume tens of pJ per sample in the realistic IP blocks that are commercially available. Although only one pair of DAC and ADC operations is required per dot product, that is still roughly 40 fJ per MAC from conversion alone, assuming a 256x256 matrix multiplication and not taking other system-level issues into account. This limits memristors to about a 5x advantage over current digital architectures, and as nodes shrink, by the time memristors come out, it will be closer to 3x. While 3x is considerable, I don't think it justifies the moonshot-level deep-tech risk that memristors will continue to represent. Many hardware companies [Tabula...] have failed while attempting to reach something like a 3x improvement in their main figure of merit, only to find that system-level issues leave them at 1x instead. Besides, I'm sure digital architectures have more than 3x room for improvement -- plenty of tricks left for digital!
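For concreteness, here's that converter-floor arithmetic spelled out in a few lines of Python, taking ~10 pJ for the DAC + ADC pair per dot product (which is what the ~40 fJ/MAC figure above implies); the other numbers are the rough ones quoted above:

    # Converter energy floor for an analog MAC array (rough assumed numbers, as in the comment above).
    dot_product_length = 256              # a 256x256 matrix: 256 MACs amortize one conversion pair
    pair_energy = 10e-12                  # J, assumed DAC + ADC energy per dot product (~10 pJ)

    converter_floor_per_mac = pair_energy / dot_product_length   # ~3.9e-14 J, i.e. ~40 fJ per MAC
    digital_per_mac = 250e-15             # J, ~250 fJ/MAC for the best claimed digital designs

    print(converter_floor_per_mac * 1e15)             # ~39 fJ per MAC from conversion alone
    print(digital_per_mac / converter_floor_per_mac)  # ~6x ceiling; other overheads pull it toward 5x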
I'm hoping for a breakthrough, because I am fundamentally an optimist, but memristors have been failing to deliver since 2008.
There's a pattern common to many unconventional storage and computational technologies: they stay behind the mainstream state of the art, keeping pace with it for a while but never catching up. Things will probably change if silicon stops improving.
You can go to Groq.com -- that startup claims 125 fJ per FLOP (and each MAC counts as two FLOPs thanks to marketing logic). Started by 8 of the 10 founding TPU team members.
Although probably not sufficient for AGI, network architecture is essentially guaranteed to be important, given both ample empirical evidence of the importance of architectures and ample reason, from facts about numerics, to expect that importance to continue.
In the first category (empirical evidence),
- The discrete leap from non-LSTM RNN to LSTM network performance on NLP was essentially due to a "better factoring of the problem": breaking out the primitive operations that equate to an RNN having "memory" had a substantial effect on how well it "remembered."
- The leap in NMT from LSTM seq2seq to attention-based methods (the Transformer by Google) is another example. Long-distance correlations made yet another leap because they are simply modeled more directly by the architecture than in the LSTM.
- The relation network by DeepMind is another excellent example of a drop-in, "pure" architectural intuition-motivated replacement that increased accuracy from the 66% range to the 90% range on various tasks. Again, this was through directly modeling and weight-tying relation vectors through the architecture of the network.
- The capsule network for image recognition is yet another example. By shifting the focus of the architecture from arbitrarily guaranteeing only positional invariance to guaranteeing other sorts, the network was able to do much better at overlapping MNIST. Again, a better factoring of the problem.
These developments all illustrate that picking the architecture and the numerical guarantees baked into the "factoring" of the architecture (for example, weight tying, orthogonality, invariance, etc.) can have and has had a profound effect on performance. There is no reason to believe this trend won't continue.
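To make "weight tying as a factoring of the problem" concrete, here's a toy numpy sketch in the spirit of the relation network; the shapes and the single-layer relation function are made up for illustration and are nothing like the real model:

    import numpy as np

    # Toy weight tying: one relation function (a single weight matrix W_g) is shared
    # across every pair of objects, and the results are aggregated order-invariantly,
    # so the pairwise structure is baked into the architecture rather than relearned.
    rng = np.random.default_rng(0)
    n_objects, d = 5, 8
    objects = rng.normal(size=(n_objects, d))

    W_g = rng.normal(size=(2 * d, d)) * 0.1   # the single shared ("tied") weight matrix

    relations = [
        np.tanh(np.concatenate([objects[i], objects[j]]) @ W_g)
        for i in range(n_objects)
        for j in range(n_objects)
    ]
    output = np.sum(relations, axis=0)        # permutation-invariant aggregation
    print(output.shape)                       # (8,)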
In fact, there are some very interesting ways to think about the principles behind network structure -- I can't say for sure that it has any predictive power yet, but types are one intuitively appealing way to look at it: http://colah.github.io/posts/2015-09-NN-Types-FP/
Shameless plug: I am a founder of AlphaSheets, a company working on solving all of these issues. It's quite scary (building a spreadsheet is like boiling an ocean) but our mission feels very meaningful, we're well-funded, and we are now stable and serving real users.
A big problem in finance workflows is the tradeoff between several factors: correctness, adoption/ease-of-use, rapid prototyping, and power. We aim to address several of these. We've built a real-time collaborative, browser-based spreadsheet from the ground up that supports Python, R, and SQL in addition to Excel expressions.
Correctness is substantially addressed, because you don't need to use VLOOKUP or mutative VBA macros anymore. Your data comes in live, and you can reference tables in Python as opposed to individual cells. A lot of operational risk goes away as well, because the AlphaSheets server is a single source of truth.
We help with adoption of Python and adoption of correct systems as well. You can gradually move to Python in AlphaSheets -- many firms are trying to make a "Python push" and haven't succeeded yet because the only option is to move to Jupyter, and that's too much of a disruption. AlphaSheets is less brittle than Excel, and the important keyboard shortcuts are there.
And finally, the entire Python ecosystem of tools (pandas, numpy, etc.) and all of R is available, meaning that many pieces of functionality that had to be painstakingly built in-house in VBA and pasted around are simply available out of the box in well-maintained, battle-tested packages.
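As a small illustration of the VLOOKUP point above (plain pandas, not AlphaSheets-specific syntax, with made-up column names and data): the usual lookup-and-enrich pattern becomes a one-line table join.

    import pandas as pd

    # A VLOOKUP-style enrichment done as a table operation instead of per-cell formulas.
    trades = pd.DataFrame({"ticker": ["AAPL", "MSFT", "AAPL"], "qty": [10, 5, 3]})
    prices = pd.DataFrame({"ticker": ["AAPL", "MSFT"], "price": [190.0, 410.0]})

    enriched = trades.merge(prices, on="ticker", how="left")   # the whole lookup in one line
    enriched["notional"] = enriched["qty"] * enriched["price"]
    print(enriched)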
Our long term plan is to broaden our focus into other situations in which organizations are outgrowing their spreadsheets. We think there's a lot of potential with the spreadsheet interface but the Excel monopoly has prevented meaningful innovation from happening. For example, every BI solution tries to be "self-serve" and "intuitive" these days, but encounters resistance from users who end up sticking with spreadsheets due to their infinite flexibility and immediate familiar appeal.
We hope to bring the spreadsheet in line with the realities of the requirements of the modern data world -- big data, tabular data, the necessity of data cleaning, data prep / ETL, the availability of advanced tooling (stats, ML), better charting -- because we think there's a giant market of people waiting to move to a modernized but familiar spreadsheet.
If anyone's interested, contact me -- I'd be very interested in chatting! I'm michael at alphasheets dot com :)
Shameless plug: I'm a founder of AlphaSheets, a company seeking to solve problems like these! I couldn't resist replying after seeing these comments.
We make a collaborative (Google Sheets style) spreadsheet with Python and R running in the sheet. You can define functions, plot using ggplot, embed pandas dataframes, numpy matrices, and all that good stuff. We don't let people use macros; all the code runs in cells, because we think macros are too brittle. You can check out the website at http://alphasheets.com .
We're seeing that many enterprises (for example, in finance) that have Excel power users are moving to Python because of limitations like these, and are running into adoption issues because people like spreadsheets so much. That's generally where we come in and provide a bridge from the Excel world to Python through a more friendly frontend.
We're also seeing that AlphaSheets can help a lot with shortening feedback cycles on more sophisticated data analyses -- Excel is the most popular self-serve analytics tool out there, but it doesn't cover cases where you need Python/R/fresh data.
This is very nice. Problem is, there are sooo many more features in Excel you'll have to copy to get me to move. If you ask "which ones" I'll say "all of them". I'm a power user. I build huge dashboards and analytical tools in Excel. The thing I hate most is that all my work goes into a file that I have to pray works on the other person's computer.
The product is great. But you guys will need to launch a fully feature rich desktop client, which can sync with the cloud.
Else it's the same thing mentioned in the previous comments. You would build a web app with 5% of the features of Excel, and the moment somebody reaches a use case that can't be solved with your tool, they will have to switch to Excel. If they have to switch every second time they use your product, they might as well do all their work in Excel to begin with.
You have to be at feature parity with Excel, and you can't do that with a web app alone.
I looked at the demo video; it didn't appear to allow embedding of anything that R would output, but rather just appended data that was manipulated within R back to the output of the original query.
Mode has a Python notebook integrated and can read in your queries as datasets and then embed whatever you render from Python. You cannot modify the original data of the query within the Python notebook and then use Mode's stock visualizations on the data you created in Python.
This is a pretty big difference between the two if I'm understanding it correctly.
The GUI/dashboarding of this looks way better than Mode's.
I'm curious how Cluvio supports filters such as drop-down menus, filters based on dynamic queries (vs. hardcoded), and drop-down filters with multi-select. (Mode does these things very poorly; Chartio does them pretty well.)
i.e., the R step is injected: it takes data as input and produces data (in the form of a data.frame, a vector of values, or primitive values) as output.
Re: the filters. We support time-based filters very well with a nice UI (custom time ranges, time ranges relative to today, an additional comparison time range). Custom value filters are currently in beta and, once launched (in a couple of weeks), will support multi-select as well as values based on dynamic queries.
There are some similarities conceptually (SQL-based analytics, use of R or Python to give additional capabilities).
There are more differences in the visualisation capabilities, user experience, and pricing (esp. with a larger number of business users that should make use of the analytics results, which are free in our case).
Frontend Developer (React) | AlphaSheets | up to $150k (depends on level of hire and equity tradeoff) + equity | Contractors welcome; remote or onsite | Bay Area
You can check more examples out at alphasheets.com.
AlphaSheets marries the capabilities of spreadsheets (a simple WYSIWYG calculation interface) with the full power of programming. We've gotten excitement from Wall Street quants, marketing analysts, pharmaceutical scientists, and insurance analysts. Our broader audience is the burgeoning population of people who can write small bits of code but aren't full-on software engineers. We envision a future where tens of millions of people with these skills see AlphaSheets as their tool of choice for data analysis.
Experience is a plus, but not a must as long as you're smart. We have a React+ES6+Flow / Haskell stack. We love seeking leverage through good architecture, languages (Haskell!), frameworks, and tools. (Doesn't matter at all for this position if you don't know Haskell.) We're well funded (big seed round) and have 4 years' runway so we're not going away overnight.
Our culture is one of efficient, open communication and rational decision making. You'll be joining a founding team of 4 guys out of MIT.
Email our CTO (Anand Srinivasan) at anand (at) alphasheets (dot) com
Would love to share notes if you're up for it!