
Thanks for sharing this!

I only "followed" so far the properties given at Wolfram Community:

https://community.wolfram.com/groups/-/m/t/3347182


In this presentation we discuss:

- Two ways of plotting chessboards

- Knight's tour graphs

- Hamiltonian paths

- Animations of knight's tours


Showcasing Raku's tools for exploratory data analysis over programming-language data.

Here is the presentation notebook: https://github.com/antononcube/RakuForPrediction-blog/blob/m...

Here is the corresponding Wolfram Language (aka Mathematica) notebook: https://community.wolfram.com/groups/-/m/t/3180327


It seems you are describing how functional parsers (aka parser combinators) work.

(BTW, there is a "Parser combinators" section in the featured post/article.)
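
For concreteness, here is a tiny parser-combinator sketch in Raku (my own illustration, not from the article): a parser is a function from a string and a position to the list of (value, new-position) pairs it can produce, and the combinators lit, alt, and seq build bigger parsers from smaller ones.

  # A parser: ($string, $pos) -> list of (value, new-pos) pairs (empty on failure).
  sub lit(Str $c) {
      -> $s, $i { $i < $s.chars && $s.substr($i, 1) eq $c ?? (($c, $i + 1),) !! () }
  }

  # Alternation: collect the results of both parsers.
  sub alt(&p, &q) { -> $s, $i { |p($s, $i), |q($s, $i) } }

  # Sequencing: run q from every position p can reach.
  sub seq(&p, &q) {
      -> $s, $i {
          gather for p($s, $i) -> ($v1, $j) {
              for q($s, $j) -> ($v2, $k) { take (($v1, $v2), $k) }
          }
      }
  }

  # "ab" or "ac", assembled from the three combinators above.
  my &ab-or-ac = alt(seq(lit('a'), lit('b')), seq(lit('a'), lit('c')));
  say ab-or-ac('ac', 0);    # (((a c) 2))

Because alt keeps every alternative alive, ambiguity is handled "for free" -- which is also where the exponential blow-ups mentioned later in this thread come from.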


I believe the proper term for what I am describing is a recursive descent parser, with which it is also quite doable to implement proper error handling and even recovery. Some form of this is used in almost every production language, I think.

It has been years since I've written a proper parser, but before that, every time I had to write one I tried the latest and greatest first: ANTLR, Coco/R, combinators. All the generated ones seemed to have a fatal flaw that hand-writing didn't have. For example, good error handling seemed almost impossible, they were very slow due to infinite look-ahead, or they were almost impossible to debug to find an error in the input schema.

In the end, hand-crafting seems to be faster and simpler. YMMV.

My point about the article was mostly that all the formal theory is nice but all it does is scare away people, while parsing is probably the simplest thing about writing a compiler.
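
To make that concrete, here is a minimal hand-written recursive descent parser in Raku (a sketch of mine, not anyone's production code): integer arithmetic with precedence and parentheses, one method per grammar rule, plus the kind of pointed error message that is easy to emit by hand.

  # A tiny expression grammar, parsed with one method per rule:
  #   expr   ::= term   (('+' | '-') term)*
  #   term   ::= factor (('*' | '/') factor)*
  #   factor ::= number | '(' expr ')'
  class Parser {
      has Str $.src is required;
      has Int $!pos = 0;

      method !error(Str $want) {
          die "Expected $want at position {$!pos}: ...{$.src.substr($!pos, 10)}";
      }
      method !ws()   { $!pos++ while $!pos < $.src.chars && $.src.substr($!pos, 1) ~~ /\s/ }
      method !peek() { $!pos < $.src.chars ?? $.src.substr($!pos, 1) !! '' }

      method !expr() {
          my $v = self!term;
          loop {
              self!ws;
              my $op = self!peek;
              last unless $op eq '+' | '-';
              $!pos++;
              my $rhs = self!term;
              $v = $op eq '+' ?? $v + $rhs !! $v - $rhs;
          }
          $v
      }
      method !term() {
          my $v = self!factor;
          loop {
              self!ws;
              my $op = self!peek;
              last unless $op eq '*' | '/';
              $!pos++;
              my $rhs = self!factor;
              $v = $op eq '*' ?? $v * $rhs !! $v / $rhs;
          }
          $v
      }
      method !factor() {
          self!ws;
          if self!peek eq '(' {
              $!pos++;
              my $v = self!expr;
              self!ws;
              self!peek eq ')' or self!error("')'");
              $!pos++;
              return $v;
          }
          my $start = $!pos;
          $!pos++ while self!peek ~~ /\d/;
          $!pos > $start or self!error('a number');
          $.src.substr($start, $!pos - $start).Int
      }
      method parse() {
          my $v = self!expr;
          self!ws;
          $!pos == $.src.chars or self!error('end of input');
          $v
      }
  }

  say Parser.new(src => '2 * (3 + 4) - 5').parse;   # 9
  # Parser.new(src => '2 * (3 + 4').parse
  #   dies with: Expected ')' at position 10: ...

The error path is the point: because the parser is ordinary code, reporting what was expected and where takes one line, with no fighting against a generator.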


The bad news about those is that it's easy to mindlessly create a parser that runs in exponential time.

The good news is that this happens in the grammar definition. So once you define your language well, you don't have to watch for it anymore.
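
A toy illustration in Raku (mine, not from the thread): with the backtracking regex declarator, a quantified subrule that is itself quantified is ambiguous, and once the trailing 'b' fails to match, the engine retries every way of splitting the run of 'a's, so the running time blows up combinatorially with the input length. Defining the same language with the ratcheting token declarator (or removing the ambiguity) makes the problem disappear -- the fix really does live in the grammar definition.

  # Ambiguous, fully backtracking grammar: <chunk>+ over chunks that are
  # themselves 'a'+ can split a run of 'a's in a huge number of ways.
  grammar Slow {
      regex TOP   { <chunk>+ 'b' }
      regex chunk { 'a'+ }
  }

  # Same language, but 'token' ratchets (no backtracking into matched text),
  # so a failed parse fails fast.
  grammar Fast {
      token TOP   { <chunk>+ 'b' }
      token chunk { 'a'+ }
  }

  say so Fast.parse('a' x 22 ~ 'c');   # False, immediately
  say so Slow.parse('a' x 22 ~ 'c');   # False, but only after a very long search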


Insightful!

Do you know of any "large scale" research on this? I.e. analysis of multiple related projects and/or of "real life stories."

(I agree regardless.)


I don't know about any real-world study. But there are people complaining about it from time to time, and it's quite obvious from the theory.


Very long write-up! If you read it and like the content, you might consider ditching Python and starting to use Raku a lot and often.


Yes, you can do that (easily) with Wolfram Language (aka Mathematica).

Here is an example:

    (* A 3x3 matrix of random magnitudes with randomly chosen units *)
    mat1 = Table[
       RandomChoice[{
           Quantity[RandomReal[], "Meters"],
           Quantity[RandomReal[], "Seconds"],
           Quantity[RandomReal[], "Meters/Seconds"],
           Quantity[RandomReal[], "Meters/Seconds^2"]
         }], 3, 3];
    mat1 // MatrixForm
    
    (* Multiply the matrix by its transpose *)
    mat1 . Transpose[mat1]
See the corresponding screenshot: https://imgur.com/aP9Ugk2


I whipped this up in Raku:

  use Physics::Measure :ALL;

  # A custom infix operator: matrix product of two arrays of Measure objects.
  sub infix:<·>(@x1, @x2) {
    die "Incompatible dimensions."
            unless @x1 == @x2[0] && @x1[0] == @x2;

    [for ^@x1 -> $m {
        [for ^@x1 -> $n {
            # Row $m of @x1 times column $n of @x2, summed with the [+] reduction.
            [+] @x1[$m;*] >>*<< @x2[*;$n]
        }]
    }]
  }

  my @m = [[1m,2m],[3m,4m]]; 

  say @m · [Z] @m;     # [Z] @m is the transpose of @m; output: [[5m^2 11m^2] [11m^2 25m^2]]
Since Physics::Measure is strict about illegal combinations, and since there are not many realistic random combinations of units (s^2, anyone?), I have not gone random in my example.


Wolfram Language (aka Mathematica) is the best for doing scientific computing with physical unit dimensions.

See: https://reference.wolfram.com/language/guide/Units.html

> The units framework [of Wolfram Language] integrates seamlessly with visualization, numeric and algebraic computation functions. It also supports dimensional analysis, as well as purely symbolic operations on quantities.


Note that Mathematica units impose a very large runtime penalty, making them unsuitable for a lot of applications.


The applications for which WL units are suitable far outnumber the unsuitable ones you refer to.

It is true that I (and others) avoid using Wolfram Language (WL) units, or try to make the computation expressions units-free as quickly as possible. But that is also a normal procedure when crafting computational workflows.
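
For illustration, here is roughly what that "units-free as quickly as possible" pattern looks like in Raku with Physics::Measure (a hedged sketch of mine; it assumes the postfix unit literals shown elsewhere in this thread and a .value accessor on Measure objects -- adapt to whatever your units library actually provides):

  use Physics::Measure :ALL;

  my @lengths = 1m, 2m, 3m, 4m;

  # Drop to plain numbers once, at the boundary of the computation...
  my @x = @lengths.map(*.value);

  # ...do the unit-free numeric work without per-operation unit bookkeeping...
  my $rms = sqrt( ([+] @x.map(* ** 2)) / @x.elems );

  # ...and attach the unit again only when reporting the result.
  say "RMS length: $rms m";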


TL;DR (via an LLM)

## SUMMARY

Steven Johnson discusses transforming his book "The Infernal Machine" into an interactive game using AI, highlighting advancements in AI context windows.

## IDEAS

- Interactive games can be created from narrative texts using AI and a 400-word prompt.

- AI can transform linear narratives into immersive adventures, impacting education and entertainment.

- The context window of AI models has dramatically increased, enhancing their capabilities.

- Long context windows allow AI to maintain narrative coherence and factual accuracy.

- AI models can now manage parallel narratives and timelines in interactive simulations.

- The expansion of AI context windows has improved conversational fluidity and factual reliability.

- AI's ability to personalize content is enhanced by long context windows.

- Long context models can provide insights from large corpora of documents.

- AI can now simulate complex cause-and-effect chains in narratives.

- Authors can test AI's understanding of their work by uploading unpublished manuscripts.

- AI can identify narrative techniques like foreshadowing in texts.

- Long context models enable AI to track both factual and fictional timelines in games.

- AI can now provide personalized insights based on user-uploaded documents.

- The ability to swap information in and out of AI's context window is a significant advancement.

- AI models can now hold millions of words in their context, enhancing their utility.

- AI can serve as a "second brain," recalling facts and ideas from a user's history.

- AI can help identify patterns and simulate responses in organizational archives.

- Long context models can enhance collective intelligence in organizations.

- AI can provide expert insights by drawing on an expert's entire career archive.

- Organizations may benefit from curating diverse sources for AI context windows.


@ars is not saying that; @ars is stating a minimum range threshold to buy an Electric Vehicle (EV).

I completely understand why someone would have such a threshold in the USA.

For example, consider a round trip starting and finishing at a place that is a few hundred miles away from a big city. With an ICE vehicle, fueling during the trip is a no-brainer. With an EV, charging during the trip would take non-trivial planning if the EV's range is, say, 300 miles. With a 600-mile range it is easy -- just charge beforehand.


R and its ecosystem have some unbeatable features, but, generally speaking, the "old" base R is too arcane to be widely useful. Also, being "made by statisticians for statisticians" should be a big warning sign.


Despite R being made by statisticians, I ironically find that munging R packages together for certain classes of analysis is such a slog that it prevents me from doing the actual statistical thinking. Sometimes the plots fall behind commercial packages, sometimes the diagnostics, and sometimes you have to combine multiple incompatible packages to get what a commercial package can do.

(Survival analysis and multilevel modeling come to mind.)


This is so far from my experience. For me, R code does tend to skimp on polish, so it takes longer to get to the initial figure, but that is made up for by letting me see the data from a much richer perspective (to some extent because I had to think harder about what the output meant), such that I can find all the bugs in the data and in the underlying experimental plan: the stuff which makes it clear that the commercial reports are mostly useless anyway, because garbage in -> garbage out.


On the contrary, I find base R less arcane than the current Python libraries du jour which copied it.

