The keyboard supports it, but your hands likely do not! Most people can't flex some fingers while simultaneously extending others.
If you really wanted to do this for some reason (chording?), you could use too-small finger keys and press harder. But this would defeat the main point, which is to be aggressively ergonomic.
In normal programming languages, I see static type systems as a necessary evil: TypeScript is better than JavaScript, as long as you don't confuse types with terms...
But in a logic, types are superfluous: you already have a notion of truth, and types just overcomplicate things. That doesn't mean you cannot have mathematical objects x and A → B such that x ∈ A → B, of course; but there A → B is itself a term, so you can indeed use terms instead of types.
So in a logic, I think types represent a form of premature optimisation of the language that invariants are expressed in.
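To make the contrast concrete (Fun is an illustrative name, not fixed notation):

    x : A → B         (typing judgment, a separate layer of the language)
    x ∈ Fun(A, B)     (ordinary formula; Fun(A, B) is just another term)

In the second form, membership is a predicate like any other, and the invariant is expressed inside the logic itself.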
Your invocation of Strong AI (in the linked paper) seems like a restatement of the Sufficiently Smart Compiler [1] fallacy that’s been around forever in programming language debates. It’s hypothetical, not practical, so it doesn’t represent a solution to anything. Do you have any evidence to suggest that Strong AI is imminent?
I routinely describe code that I want in natural language, and it generates correct TypeScript code for me automatically. When it gets something wrong, I see that it is because of missing information, not because it is not smart enough. If I needed any more evidence for Strong AI since AlphaGo, that would be it.
I wouldn't call it vibe coding; I just have much more time to focus on the spec than on the code. I'd call that a sufficiently smart compiler.
This is a very reductive definition of types, if not a facile category error entirely (see: Curry–Howard), and what you call "premature optimization" -- if we're discussing type systems -- is really "a best effort at formalizations within which we can do useful work".
AL doesn’t make types obsolete -- it just relocates the same constraints into another formalism. You still have types, you’re just not calling them that.
I think I have a reference to Curry in my summary link. Anyways, Curry–Howard is a nice correspondence, about as important to AL as the correspondence between the groups (ℝ, 0, +) and (ℝ \ {0}, 1, ·); by which I mean, not at all. But type people like bringing it up even when it is not relevant at all.
No, sorry, I really don't have types. Maybe trying to reduce all approaches to logic to Curry–Howard is the very reductive view here.
If your system encodes invariants, constrains terms, and supports abstraction via formal rules, then you’re doing the work of types whether you like the name or not.
Dismissing Curry–Howard without addressing its foundational and inextricable relevance to computation and logic isn’t a rebuttal.
Saying "Curry-Howard, Curry-Howard, Curry-Howard" isn't an argument, either.
I am not saying that types cannot do this work. I am saying that you don't need types to do this work, and AL is proof of that. Well, first-order logic is already proof of that, but it doesn't have general binders.
Now you are saying that whenever this work is done, it is Curry-Howard, but that is just plain wrong. Curry-Howard has a specific meaning; maybe read up on it.
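For other readers: the specific meaning is that propositions correspond to types and proofs correspond to programs. A minimal Lean sketch of one instance of the correspondence:

    -- Curry-Howard in one line: a proof of A → A is literally
    -- the identity program at type A.
    def implSelf (A : Prop) : A → A := fun a => a

Whether the correspondence applies to a given system depends on whether its proofs carry computational content in this sense.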
Curry–Howard applies when there’s a computational interpretation of proofs — like in AL, which encodes computation and abstraction in a logic.
You don’t get to do type-like work, then deny the structural analogy just because you renamed the machinery. It’s a type system built while insisting type systems are obsolete.
Personally, I would enjoy it if TLA+ had types, though, and TLA+ belongs to the land of logic, right? I do not know how it differs from the abstraction logic referred to in your writing and your other whitepapers.
What is commonly used is a TypeOK predicate that verifies that your variables have the expected type. This is fine, except your intermediate values can still end up with unintended types or values, so you won't spot the mistake until you evaluate the TypeOK predicate, and not at all if the checker doesn't visit the right corners of the state space. At least TypeOK can be much more expressive than any type system.
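For readers who haven't seen the idiom, a TypeOK predicate looks something like this (variable and set names illustrative):

    TypeOK == /\ counter \in Nat
              /\ inbox \in Seq(Message)
              /\ mode \in {"idle", "busy"}

The checker evaluates it only in the states it actually visits, which is exactly why unvisited corners go unchecked.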
There is a new project in the same domain called Quint; it has types.
Practically, in abstraction logic (AL) I would solve that by having an error value ⊥ (AL is not a practical thing yet, unlike TLA+; it still needs libraries and tools), and by making sure that abstractions return ⊥ whenever ⊥ is an argument of the abstraction, or when the return value is otherwise not well-defined [1]. For example,
div 7 0 = ⊥,
and
plus 3 ⊥ = ⊥,
so
plus 3 (div 7 0) = ⊥.
In principle, that could be done in TLA+ as well, I would guess. You would then have to prove a predicate Defined, where Defined x just means x ≠ ⊥.
[1] Actually, it is probably rather the other way around: make sure that if your expression e is well-defined under certain preconditions, you can then prove e ≠ ⊥ under the same preconditions.
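A minimal sketch of this convention in TypeScript, with undefined playing the role of ⊥ (illustrative names, not AL syntax):

    // undefined stands in for the error value ⊥.
    type Val = number | undefined;

    // Operations propagate ⊥ and return ⊥ wherever the mathematical
    // operation is not well-defined.
    const div = (a: Val, b: Val): Val =>
      a === undefined || b === undefined || b === 0 ? undefined : a / b;

    const plus = (a: Val, b: Val): Val =>
      a === undefined || b === undefined ? undefined : a + b;

    // The Defined predicate: Defined x just means x ≠ ⊥.
    const defined = (x: Val): boolean => x !== undefined;

    plus(3, div(7, 0));  // undefined, i.e. plus 3 (div 7 0) = ⊥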
This looks like you've defined the abstract lattice extension as a proper superset of the concrete semantics. That sort of analysis is entirely subsumed by Cousot & Cousot's work, right? Abstract Interpretation doesn't require static types; in fact, the imposition of a two-level interpretation is secondary. The constraints on the pre-level are so that the behavior of the program can be checked "about as quickly as parsing", while also giving strong correctness guarantees under the systems being checked.
Moving the whole thing to dynamic behavior doesn't tell us anything new, does it? Lisps have been tagged & checked for decades?
I have not defined any "abstract lattice extension" explicitly, which is nice: why would I need to know about lattices for something as simple as this? It is just a convention I suggest for getting a useful Defined predicate. Nobody can stop you from defining mul 0 ⊥ = 0, for example, and that might sometimes make sense.
I would suggest that abstraction logic compares to type theory as Lisp compares to Standard ML; but that is just an analogy and not precise.
My problem with your interaction on this forum, so far, is that it kind of ignores the practical (artisan) craft of programming. The use of static type systems by working engineers isn't math; it's more akin to "fast, formal checking". You can assert all day long that you don't like types or that they're useless; but every working engineer knows you're wrong: static type systems have solved a real problem. Your arguments need to be rooted in the basic experience of a working engineer.

Single-level programming has its place, but runtime checking is too error-prone in reality to build large software. If you think this isn't true, then you need to show up to the table with large-scale untyped software. For instance, the FORTH community has demonstrated real advantages. Where's your OS? Your database? Your browser? If you had real advantages, you'd have these things in hand already.
Otherwise, you're just another academic blowhard using a thin veneer of formalism to justify your own motivated reasoning.
Strong opinion! Also, it seems you didn't read my interaction so far carefully enough. You also didn't read the summary I linked to; otherwise you would know that I fully approve of the current use of types in programming languages, as they provide lightweight formal methods (another name for what you call "fast, formal checking"). But I think we will be able to do much better: we will have fast, formal checking without static types. Not today, not tomorrow, but certainly not more than 10 years from now.
I am not sure if I misunderstand you. Types are for domain semantics, for the real world: they help to disambiguate human language, and they make explicit the context that humans just assume when they talk about their domain.
Logic is abstract. If you are implying people should be able to express a type system in their host language, that would be interesting. I can see something like Prolog as type annotations, embedded in any programming language; it would give tons of flexibility, but it would also shift quite a burden onto the programmer.
Types for real-world semantics are fine, they are pretty much like predicates if you understand them like that.
The idea of using predicates instead of types has been tried many times. The main problem (I think) is that you still need a nice way of binding variables, and types seem to be the only way to do that, so you end up introducing types anyway, and then what is the point? The nice thing about AL is that you can have a general variable-binding mechanism without having to introduce types.
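The contrast in its simplest form, stating the same thing with a typed binder and with an untyped binder plus a predicate:

    ∀ x : ℕ. P(x)           (typed binder: the type system supplies the domain)
    ∀ x. x ∈ ℕ → P(x)       (untyped binder: an ordinary predicate restricts x)

The second style is how one-sorted first-order logic has always worked; the claim here is that general binders can work the same way.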
AL as described sounds like it reinvents parts of the meta-theoretic infrastructure of Isabelle/HOL, but repackaged as a clean break from type theory instead of what it seems to be — a notational reshuffling of well-trod ideas in type theory.
Given that I have been an Isabelle user and/or developer since about 1996, similarities with Isabelle are certainly not accidental. I think Isabelle got it basically right: its only problem (in my opinion) is that it is based on intuitionistic type theory as a metalogic and not abstraction logic (nevertheless, most type theorists pretty much ignored Isabelle!). Abstraction logic has a simple semantics; ITT does not. My bet is that this conceptual simplicity is relevant in practice. We will see if that is actually the case or not. I've written a few words about that in the abstraction logic book on page 118, also available in the sample.
> — a notational reshuffling of well-trod ideas in type theory
Always fun to encounter disparaging comments (I see that you deleted the other one in the other thread), but I wrote this answer more for the other readers than for you.
I'm not a logician, but do you mean that predicates and their algebra are a more granular and universal way to describe what a type is... basically that names are a problem?
Yes and no. Yes, predicates are more flexible, because they can range over the entire mathematical universe, as they do for example in (one-sorted) first-order logic. No, names are not a problem; predicates can have names, too.
So if names are not an issue, is the problem with the usual static type systems that they lack a way to manipulate/recombine user-defined types, leading to expressive dead ends?
The hyperreals don't fix the x/0 or 0/0 problems. Infinitesimals have a well-defined division and take the place of a lot of uses of 0 in the reals, but the hyperreals still have a 0, and the same problem posed in their comment exists when you consider division on the hyperreals as well.
I'm also curious what they intended, but there aren't many options:
1. The question is ill-posed. The input types are too broad.
2. You must extend ℝ with at least one additional point representing the result. Every choice is bad for a number of reasons (e.g., you no longer have multiplicative inverses and might not even have well-defined addition or subtraction on the resulting set). Some are easier to work with than others. A dedicated `undefined` value is usually easy enough to work with, though sometimes a single "infinity" isn't terrible (if you consider negative infinities it gets more terrible).
3. You arbitrarily choose some real-valued result. Basically every theorem or application considering division now doesn't have to special-case zero (because x/0 is defined) but still has to special-case zero (because every choice is wrong for most use cases), leading to no real improvements.
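For what it's worth, IEEE 754 floating point is a concrete instance of option 2: it extends the reals with ±Infinity plus a dedicated NaN value, and the trade-offs are easy to observe (TypeScript shown, but any IEEE 754 language behaves the same):

    console.log(7 / 0);               // Infinity
    console.log(-7 / 0);              // -Infinity
    console.log(0 / 0);               // NaN, the dedicated "undefined" value
    console.log(NaN === NaN);         // false: NaN breaks ordinary equality
    console.log(Infinity - Infinity); // NaN: ∞ - ∞ has no consistent value

Division becomes total, but exactly as item 2 warns, the resulting set no longer supports well-behaved arithmetic everywhere.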
It's undefined because there isn't an obvious choice:
- At x := 0, every real is a reasonable limit for some path through the Euclidean plane to (0,0). Even ignoring the 0/0 problem, x/0 still has both positive and negative infinity as reasonable choices. Suppose you choose a different extension, like only having a single point at infinity; then you give up other properties, like being able to add and subtract all members of the set you're considering.
- However you define x/0, it still doesn't work as a multiplicative inverse for arbitrary x.
A good question to ask is why you want to compute x/0 in the first place. It is, e.g., sometimes true that doing so allows you to compute results arithmetically and ignore intermediate infinities. Doing so is usually dangerous though, since your intuition no longer lines up with the actual properties of the system, and most techniques you might want to apply are no longer valid. Certain definitions are more amenable to being situationally useful (like the one-point compactification of the reals), but without a goal in mind the definition you choose is unlikely to be any better than either treating division as a non-total function (not actually defined on all of ℝ), or, equivalently, considering its output to be ℝ extended with {undefined}.
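For reference, the usual conventions on the one-point compactification ℝ ∪ {∞} (the real projective line) show how the exceptional cases move rather than disappear:

    x / 0 = ∞  for x ≠ 0        x / ∞ = 0  for x ≠ ∞
    ∞ + x = ∞  for x ≠ ∞        ∞ · x = ∞  for x ≠ 0

    0/0, ∞/∞, ∞ + ∞, and 0 · ∞ remain undefined

Division is now defined at 0, but new undefined combinations appear elsewhere.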
Not that it directly applies to your question: ℝ typically does not include infinity. ℝ̄ (also written [-∞, +∞]) is notation sometimes used for the reals extended with two infinities, though notation varies both for that and for the one-point compactification.
There is nothing practically usable right now. I hope there will be before the end of the year. Algebraic effects seem an interesting feature to include from the start; they seem conceptually very close to abstraction algebra.
Still somewhat janky. I use it on my work machine (since it at least seems a bit faster than using VirtualBox) and regularly run into issues where npm won't build my project due to the existence of symlinks [1,2]. WSLg windows also don't yet have first-party support from the windowing system [3]. I also remember having trouble setting up self-signed certs and getting SSL working.
> First tier, high cost of living should be for high earners, singles or childless people, they should pay more taxes while second tier areas should offer UBI, and should generally subsidise people having children
That seems unfair to me. Why should singles and childless people subsidise people with children?
> Why should singles and childless people subsidise people with children?
My point was mainly that that kind of first tier city would attract those kinds of people, not specifically that they should be targeted.
If you live in a 2nd tier zone, you shouldn't pay extra taxes even if you're childless.
But specifically to your question, I do think that it's fair that childless people should pay more taxes, because having a stable population is a requirement for a stable society and single/childless people aren't doing their part.
That duty can be offloaded to parents with more children, but they should be compensated for that.
You can frame it however you want -- they pay more taxes, parents pay less taxes, parents get tax rebates, parents get higher UBI per child -- the outcome is the same.
I do think that first tier cities leech off and profit from the work of the parents and educators of the people who migrate there (and generally from the whole area -- it takes a village to raise a child), and profit from exorbitant property taxes. So I think the only way to solve this curse is higher taxes (if required) on those areas and subsidised living in less desirable places -- provided that those 2nd tier places produce competent citizens.
Progressive income tax already does this "taxing first-tier cities a higher percentage" thing. The tax bands are the same in all cities, so people living in the cheaper, low-pay provincial cities already pay less tax as a proportion of income. This effect is very stark when comparing London to the rest of the UK.
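A rough illustration with recent UK bands (£12,570 personal allowance, 20% basic rate up to £50,270, 40% above that; National Insurance ignored):

    £30k provincial salary: (30,000 − 12,570) × 20% = £3,486  → ~11.6% effective
    £60k London salary:     (50,270 − 12,570) × 20% + (60,000 − 50,270) × 40%
                          = £7,540 + £3,892 = £11,432         → ~19.1% effective

Same bands everywhere, but the high-pay city ends up paying a much higher share of income.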
> Why should singles and childless people subsidise people with children?
children are the future of our society. even in today's system, children will be the ones paying your pension when you retire. putting that burden on parents alone is what is unfair. how would you like your pension to be measured based on the number of children you raised? maybe if you have no children you shouldn't get any pension at all?
do you expect not to get any pension at all and just live on your own savings? do you live on a farm growing your own food?
you can't live in this society without relying on others, and they can't live without relying on you. and unless you want humanity to die out, future generations need our support.
i was using it as a general term for whatever money you receive after you retire, apart from personal investments or savings. i could not work out whether a 401k works like any other pension scheme or not. the wikipedia page on pensions does not say that a 401k is not a pension, so i figure it is included in the definition.
i just checked the 401k page, and it says: a 401(k) plan is an employer-sponsored, defined-contribution, personal pension (savings) account.
so it's a pension.
but what we call it doesn't really matter. more important is the question of whether the payout depends on future generations paying in. in my brief search i could not tell whether the 401k is protected against that, or if it even can be protected. but if it is, i'll have to retract my claim. my apologies.
To choose the best rice cooker, consider these factors:
Top Brands: Zojirushi is often considered the best brand, with Cuckoo and Tiger as close contenders. Aroma is considered a good budget brand [1].
Types:
Basic on/off rice cookers: These are good for simple white or brown rice cooking and are usually affordable and easy to use [2].
Considerations: When buying a rice cooker, also consider noise levels, especially from beeping alerts and fan operation [3].
Specific Recommendations:
Yum Asia Panda Mini Advanced Fuzzy Logic Ceramic Rice Cooker is recommended for versatility [4].
Yum Asia Bamboo rice cooker is considered the best overall [5].
Russell Hobbs large rice cooker is a good budget option [5].
For one to two people, you don't need a large rice cooker unless cost and space aren't a concern [6]. Basic one-button models can be found for under $50, mid-range options around $100-$200, and high-end cookers for hundreds of dollars [6].
References
[1] What is the best rice cooker brand? : r/Cooking - Reddit (www.reddit.com)
[2] The Ultimate Rice Cooker Guide: How to Choose the Right One for Your Needs (www.expertreviewsbestricecooker.com)
[3] Best Rice Cooker UK | Posh Living Magazine (posh.co.uk)
[4] Best rice cookers for making perfectly fluffy grains - BBC Good Food (www.bbcgoodfood.com)
[5] The best rice cookers for gloriously fluffy grains at home - The Guardian (www.theguardian.com)
[6] Do You Really Need A Rice Cooker? (The Answer Is Yes.) - HuffPost (www.huffpost.com)
If the author/someone with knowledge of the language lurks here, there are these unanswered questions from the previous discussions that I'd be interested in: https://news.ycombinator.com/item?id=34205220
You would need first-class functions (closures). This might work for the example you've given where we pass in a string - a non-linear value.
However, consider the case where we partially apply a linear value: the value will be consumed and stored as part of the closure object. But linear values can only be contained in linear types, which means the function itself must also become linear, and then you can only use it once. These would be "one-shot" functions, because calling them would consume them; if you wanted to reuse one, you would have to return a new function to replace the old one each time you call it.
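A rough sketch of that calling convention in TypeScript (which can't enforce linearity statically; names are illustrative):

    // A one-shot function: calling it consumes it and hands you a
    // replacement closure to use next time.
    type OneShot<A, B> = (a: A) => [B, OneShot<A, B>];

    // Example: a counter whose state is threaded through replacements.
    const counterFrom = (n: number): OneShot<void, number> =>
      () => [n, counterFrom(n + 1)];

    let shot: OneShot<void, number> = counterFrom(0);
    let v: number;
    [v, shot] = shot();  // v = 0; the old closure is now "spent"
    [v, shot] = shot();  // v = 1

Nothing here stops you from calling a stale closure twice; that discipline is exactly what the linear type system would enforce.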
Not the author, but both of those features seem unlikely to fit well with the rest of the language. I believe Borretti has commented on partial application in OCaml being a big pain because it can lead to weird type inference errors, and this was one of the motivations for not having type inference in Austral. Ditto for the pipeline operator, but I might be able to see unified function call syntax, maybe? f(a, b) == a.f(b). Would be curious to hear Fernando's thoughts on this.
The article says the birds build different nest configurations: one incorporates a softer cup layer on top of the spiky layer, and another uses the spikes as part of a dome on top of the nest.