On page 29, Knuth says, "We can regard both variables and clauses as active agents, who continually tweet to their neighbors in this social network." This is in the context of statistical physics.
Seeing Knuth talk about tweets feels weird.
On another note, anyone actually run into a random k-SAT problem in "real life"?
Because there's a lot of research into solving SAT "pretty quickly", one of the half-reasonable ways to approach an NP-complete problem is to find an efficient reduction and use a premade SAT solver/heuristic. I don't think nearly as much work has gone into that sort of thing as, say, integer programming (http://en.wikipedia.org/wiki/Integer_programming), but I remember reading something about it a couple of years ago.
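If you want to see what that workflow looks like, here's a minimal sketch in Python: encode graph k-coloring (a standard NP-complete problem) as CNF and hand it to a premade solver. I'm assuming the pycosat bindings to PicoSAT, but any solver that accepts DIMACS-style clause lists would do; the encoding and helper names are just mine for illustration.

    # Hedged sketch: reduce graph k-coloring to CNF and hand it to an
    # off-the-shelf SAT solver (pycosat here; names and encoding are mine).
    import itertools
    import pycosat

    def coloring_cnf(edges, n_vertices, k):
        # Variable var(v, c) is true iff vertex v gets color c (1-based DIMACS ids).
        def var(v, c):
            return v * k + c + 1
        cnf = []
        for v in range(n_vertices):
            cnf.append([var(v, c) for c in range(k)])            # at least one color
            for c1, c2 in itertools.combinations(range(k), 2):   # at most one color
                cnf.append([-var(v, c1), -var(v, c2)])
        for u, w in edges:                                       # adjacent vertices differ
            for c in range(k):
                cnf.append([-var(u, c), -var(w, c)])
        return cnf

    triangle = [(0, 1), (1, 2), (0, 2)]
    print(pycosat.solve(coloring_cnf(triangle, 3, 2)))   # "UNSAT": no 2-coloring exists
    print(pycosat.solve(coloring_cnf(triangle, 3, 3)))   # a list of literals: decode into a 3-coloring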
Of course, when people realise their problem is NP-complete they tend to go a different way with things. The usual answers are:
1. Problems in practice are small enough for brute-force (or a simple backtracking search; see the sketch after this list).
2. We probably don't need optimality; just use some kind of heuristic or metaheuristic.
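For option 1, a backtracking search is only a few lines. This is just a sketch (real solvers add unit propagation, clause learning, better branching heuristics, and so on), and the function name is mine:

    # Minimal backtracking search over truth assignments (option 1 above).
    # Clauses are lists of non-zero ints: 1 means x1, -1 means "not x1".
    def backtrack_sat(clauses, assignment=None):
        assignment = assignment or {}
        remaining = []
        for clause in clauses:
            if any(assignment.get(abs(lit)) == (lit > 0) for lit in clause):
                continue                          # clause already satisfied
            rest = [lit for lit in clause if abs(lit) not in assignment]
            if not rest:
                return None                       # clause falsified: backtrack
            remaining.append(rest)
        if not remaining:
            return assignment                     # every clause satisfied
        v = abs(remaining[0][0])                  # branch on some unassigned variable
        for value in (True, False):
            result = backtrack_sat(remaining, {**assignment, v: value})
            if result is not None:
                return result
        return None                               # unsatisfiable under this partial assignment

    # (x1 or x2) and (not x1 or x3) and (not x2 or not x3)
    print(backtrack_sat([[1, 2], [-1, 3], [-2, -3]]))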
From what I've heard, researchers actually rejoice once a problem is proven to be in NP (instead of something worse, or of unknown status), because that means it's quite likely that most of its instances are tractable, and we know a lot about how to solve NP problems.
It's quite a common reaction in AI, yeah: once we can reduce the problem to some standard NP-problem solver, we're probably in good shape.
I believe SatPlan (1992) initiated that trend, by showing that compiling classical planning problems to SAT can get significant speedups over traditional planning algorithms in some cases. This ties into another, mostly informal hallway debate so far, over whether AI needs a replacement idea of "AI-completeness" to specify the problems that are really hard from an AI perspective. I wrote a short opinion piece on that a few years ago: http://www.kmjn.org/notes/nphard_not_always_hard.html
A more common problem than SAT not being fast enough is the reduction being intractable. It's ok if it's "big" (SAT solvers can solve for millions of variables), but it's easy for naive encodings to generate truly absurd blowups, gigabytes or more, which can make it intractable to even state the SAT problem, i.e. write it to disk or send it over a pipe. If you avoid that problem, the actual SAT-solving step is usually fast. I believe that's where some of the interest in alternatives to SAT comes from, as compilation targets, so to speak, that are easier to generate non-blown-up code for.
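A back-of-the-envelope illustration of that blowup, using a cardinality constraint as the example: encoding "at most k of these n variables are true" by forbidding every (k+1)-subset directly is hopeless, while an auxiliary-variable encoding such as Sinz's sequential counter stays roughly linear in n*k. The numbers below are rough orders of magnitude, not exact encoding sizes.

    from math import comb

    # "At most k of these n variables are true":
    n, k = 200, 10
    naive = comb(n, k + 1)       # one clause per forbidden (k+1)-subset: on the order of 10^17
    counter = 2 * n * k          # rough size of a sequential-counter encoding
    print("naive subset-enumeration clauses:", naive)
    print("sequential-counter clauses (approx):", counter)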
SAT-heritage grounding targets are still common in logic programming, though, e.g. http://potassco.sourceforge.net/ uses a SAT-like approach, though one specialized for answer-set programming rather than directly targeting SAT.
I've got a bit of a mirror image of that: I use these "AI-heritage" finite-domain solvers like SAT, ASP, clp(fd), but have been wondering lately if tools from other communities, such as integer programming, could be more useful to me for some applications. :)
My impression is that there's a little bit of tool-choice segregation by community, with OR people, AI people, software-verification people, and PLs people each having their own favorite tools, and not as much overlap as there could be.
More than weird: will a 'tweet' even have relevance 50 years from now? Will a social network? It seems a bit iffy to use terms with such a temporal cultural context.
Granted, I haven't mustered the courage to begin reading TAOCP, but my impression is that it's meant to be the man's timelessly relevant masterwork.
> More than weird: will a 'tweet' even have relevance 50 years from now? Will a social network? It seems a bit iffy to use terms with such a temporal cultural context.
I don't think Knuth is under any illusions about staying relevant: the original editions had code examples for a machine with 6-bit bytes and used self-modifying code to store subroutine return addresses rather than a stack.
In later editions he updated the examples, but even with respect to algorithms the new fascicles reference quite recent papers and results. The books, timeless as they are, are products of their time, more so than most books on mathematical topics.
The whole thing is filled with dry humor. If you’re not chuckling when you read it, you’re not paying enough attention. :-)
First example from this new section about satisfiability, the epigraphs:
> “He reaps no satisfaction but from low and sensual objects, or from the indulgence of malignant passions.” – David Hume, The Skeptic (1742)
> “I can’t get no ...” — Mick Jagger & Keith Richards, “Satisfaction” (1965)
Or an example from the text, from a few pages later on:
> “Start with any truth assignment. While there are unsatisfied clauses, pick any one, and flip a random literal in it.”
> Some programmers are known to debug their code in a haphazard manner, somewhat like this approach, and we know that such “blind” changes are foolish because they usually introduce new bugs....
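For what it's worth, that procedure is only a few lines of code. Here's a hedged Python sketch of the quoted random-walk loop, with clauses as lists of non-zero integers (positive for a plain literal, negative for a negated one); the names and the flip budget are mine. For 2-SAT a walk like this succeeds in expected polynomial time; for general k-SAT it's just a heuristic and may give up.

    import random

    # Clauses are lists of non-zero ints: 1 means x1, -1 means "not x1".
    def random_walk_sat(clauses, n_vars, max_flips=100000):
        assign = {v: random.choice([True, False]) for v in range(1, n_vars + 1)}
        for _ in range(max_flips):
            unsat = [c for c in clauses
                     if not any(assign[abs(lit)] == (lit > 0) for lit in c)]
            if not unsat:
                return assign                             # all clauses satisfied
            lit = random.choice(random.choice(unsat))     # random literal of a random unsatisfied clause
            assign[abs(lit)] = not assign[abs(lit)]       # flip it
        return None                                       # gave up

    print(random_walk_sat([[1, 2], [-1, 3], [-2, -3]], 3))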
"Tweet" is such a good replacement for "asynchronously broadcast" that it won't matter whether Twitter is still around. The meaning is clear from context. It would not surprise me if "tweet" became a reserved word in computer science research.
The word tweet has existed for over a hundred years. Assuming that birds still exist in 50 years, its meaning should be pretty easy to infer in that context.
I'm betting "social" and "network" will still be recognizable words in 50 years too.
The big issue for me is the bitmap fonts. Fortunately he used a modern dvips, so you can run it through "pkfix" to get a version with scalable fonts. (Once it's converted to PDF, it's more challenging to fix.)
All of the fonts in that postscript file are bitmaps. Knuth started with metafont, but the file he distributed only contains bitmaps.
If you look inside the file, you'll find the comment "DVIPSBitmapFont". And if you convert it to PDF and zoom in, you'll see that the fonts are bitmaps. This is typical of PostScript files generated by dvips before the Type 1 equivalents of the Computer Modern fonts came into widespread use.
Fortunately, the file was generated by a recent version of dvips, so the "pkfix" script can be used to rewrite it with Type1 fonts substituted for the bitmapped fonts. I'd recommend doing this before converting it to PDF.
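In case it's useful, a minimal sketch of that cleanup, assuming a TeX installation with pkfix and ps2pdf on the PATH (the filenames here are made up):

    import subprocess

    # Swap the bitmap PK fonts for Type 1 fonts, then convert to PDF.
    subprocess.run(["pkfix", "fascicle.ps", "fascicle-type1.ps"], check=True)
    subprocess.run(["ps2pdf", "fascicle-type1.ps", "fascicle.pdf"], check=True)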
It's a bit harder to fix this after PDF conversion. A few years ago, I wrote some Java code to do it (inspired by "pkfix-helper"), but it worked off of font metric signatures so it's not very accurate for small font subsets (e.g. exponents). To fix this, I need to actually look at bitmaps (or ask the user), and I never got around to implementing that.
I suppose it's a minor issue, but it is a pet peeve of mine. I've come across a lot of papers on the net in PDF or PS format that look unnecessarily bad when viewed on a computer screen.
OK, I didn't know enough about this format to look inside it for the fonts. But I zoomed in as far as I could in Evince and everything stayed smooth, so I just assumed it wasn't bitmapped.