
One model of geometric computation that actually handles numerical approximation is called exact geometric computation. It's pretty interesting and starts by explicitly tracking the arithmetic error of every operation you perform. At its core it's a mix of interval arithmetic and multi-precision floating point. This works because the floating-point implementations on most (all?) CPUs give strong guarantees on the error bound of arithmetic operations, usually within 1 ulp (unit in the last place) of the correct result. You then have an interval for each result and can build your geometric constructs on interval arithmetic rather than classic, single-point scalar arithmetic.
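To make that concrete, here is a minimal sketch of the idea in Python (this is not how CGAL implements it; the Interval class and its method names are hypothetical, and math.nextafter needs Python 3.9+). Each hardware operation runs in ordinary double precision and the result is padded outward by one ulp at each endpoint, so the true value is guaranteed to lie inside the interval:

    import math

    class Interval:
        """Closed interval [lo, hi] guaranteed to contain the true value."""

        def __init__(self, lo, hi=None):
            self.lo = lo
            self.hi = lo if hi is None else hi

        @staticmethod
        def _enclose(lo, hi):
            # Pad each endpoint outward by one ulp. IEEE 754 basic operations
            # are correctly rounded (error <= 1/2 ulp), so this is a safe bound.
            return Interval(math.nextafter(lo, -math.inf),
                            math.nextafter(hi, math.inf))

        def __add__(self, other):
            return Interval._enclose(self.lo + other.lo, self.hi + other.hi)

        def __sub__(self, other):
            return Interval._enclose(self.lo - other.hi, self.hi - other.lo)

        def __mul__(self, other):
            p = [self.lo * other.lo, self.lo * other.hi,
                 self.hi * other.lo, self.hi * other.hi]
            return Interval._enclose(min(p), max(p))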

Geometry works with constructs, not numbers. An intersection is a construct, collinearity is a construct, orientation / handedness is a construct, etc. If you define your fundamental constructs to be rigorous, then the algorithms you build on top will also be rigorous. Take the article's inside-outside test example: the result would be derived from the intersections (intervals) we've already computed. Since we know for sure that the correct result lies within our error interval, inside-testing boils down to comparing intervals, and there are three possible situations: 1. [a, b] < [c, d]; 2. [a, b] > [c, d]; 3. [a, b] overlaps [c, d]. In cases 1 and 2 we have a robust conclusion; in case 3 we recompute the whole predicate at higher precision until it becomes decidable.
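Here is a sketch of that filter-then-refine pattern, building on the hypothetical Interval class above and using the classic 2D orientation predicate as the construct; exact rationals stand in for the "higher precision" recomputation:

    from fractions import Fraction

    def orientation(ax, ay, bx, by, cx, cy):
        """Sign of the orientation determinant of points A, B, C."""
        # Cheap interval filter: the true determinant lies inside `det`.
        I = Interval
        det = ((I(bx) - I(ax)) * (I(cy) - I(ay))
               - (I(by) - I(ay)) * (I(cx) - I(ax)))
        if det.lo > 0:
            return 1    # whole interval is positive: robustly counter-clockwise
        if det.hi < 0:
            return -1   # whole interval is negative: robustly clockwise
        # The interval straddles zero, so the float filter is inconclusive;
        # recompute the predicate exactly (floats convert to rationals exactly).
        F = Fraction
        exact = ((F(bx) - F(ax)) * (F(cy) - F(ay))
                 - (F(by) - F(ay)) * (F(cx) - F(ax)))
        return (exact > 0) - (exact < 0)

This is essentially what CGAL calls a filtered predicate: the exact fallback only runs in the rare near-degenerate cases, so most of the time you pay only for plain double arithmetic.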

The gold standard for exact geometric computation is CGAL [1]. All the research around it is very interesting and refreshing to read.

[1] cgal.org




Hi, author here. I looked into that side of things, but CGAL only offers exact precision implementations for lines and circular arcs. The fact that Bézier curves are left as an exercise for the reader is further proof of the disconnect between computational geometry and modern computer graphics.


Fair point. At the same time, the exact computation paradigm is proof that problems like these can be solved in a rigorous way if you start with the right foundation.

By the way, thanks for writing the article; it was a refreshing read.


I'm curious: I've played with OpenSCAD a little, which uses CGAL, and found it painfully slow (with no particular point of comparison). Do you think you have a way to do similar CSG calculations faster, or at least to trade accuracy for speed, or something?


It's very plausible that these techniques could be used to fix the output of a CSG library that uses floating point math, but I'm not sure what the specifics would look like. If anyone has ideas in that vein, I'd be very interested to hear them.


I was thinking along these lines (based on skimming the documentation): CGAL uses arbitrary-precision integer-based rationals, which are slow, and floating point with the error tracking described above might potentially be faster.

Unfortunately, it's probably way beyond my ability to delve into it.


'Exact real arithmetic' is probably the best starting point for what you're describing here. It has plenty of application areas besides computational geometry. The issue with simple interval arithmetic is that the intervals widen as you chain computations. To cope with this and obtain rigorous answers, you set a required error bound on the final result and carry out arbitrary-precision computations at whatever working precision that final bound entails.
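As a rough, hypothetical illustration of that refinement loop (eval_to_within and the example expression are made up for this sketch), Python's decimal module with directed rounding can produce an enclosure at each working precision, doubling the precision until the requested bound is met:

    from decimal import Decimal, Context, ROUND_FLOOR, ROUND_CEILING

    def eval_to_within(compute, eps):
        """Re-evaluate compute(ctx) at increasing precision until the
        enclosure [lo, hi] is narrower than eps."""
        prec = 28
        while True:
            lo = compute(Context(prec=prec, rounding=ROUND_FLOOR))
            hi = compute(Context(prec=prec, rounding=ROUND_CEILING))
            if hi - lo < eps:
                return lo, hi
            prec *= 2   # enclosure still too wide: double the working precision

    # Example: enclose sqrt(2) + sqrt(3) to within 1e-50.
    lo, hi = eval_to_within(
        lambda ctx: ctx.add(ctx.sqrt(Decimal(2)), ctx.sqrt(Decimal(3))),
        Decimal("1e-50"))

The two-pass rounding trick only yields a true enclosure for expressions built from operations that are monotone in their inputs (like + and sqrt here); a real exact-real-arithmetic library propagates intervals or error bounds through every operation instead.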


I believe Android's built-in calculator app takes this approach: https://dl.acm.org/doi/pdf/10.1145/2911981

It also uses rational numbers and arbitrary-precision integer arithmetic.


I wonder if anyone has thoughts on the approach taken here ("Sound and robust solid modeling via exact real arithmetic and continuity"): https://icfp19.sigplan.org/details/icfp-2019-papers/15/Sound...


I work on developing better tools for solving computational geometry problems robustly and efficiently, so I got quite excited when this paper first appeared.

However, while the type-theoretic developments based on Abstract Stone Duality are interesting, they mostly ignore the problem of efficiency by simply representing every real number as a Dedekind cut. Thus, it doesn't scale without significant advances in compiling real arithmetic. That's a problem I'm working on at present, but it might take a few years...



