In my experience, there isn't a real use case for RDF or the other Semantic Web standards. You always have to write domain-specific code to actually drive an application with RDF - so why not ditch RDF and write a domain-specific data store too? How does RDF contribute, if it neither automatically drives any part of your application nor facilitates linking or relating multiple datasets?
I first encountered this book when I was about six years old, and my mother was using it for a calculus class. I didn't understand the content by any means, but I enjoyed the cartoon characters. Despite not understanding it, the book helped instill in me a fondness for math.
They are not all flops. There are some MMO social environments that are huge, especially outside the US. Kids like them. Habbo.com, for instance, has many millions of users, mostly from Finland and the UK.
It probably doesn't help that books on the subject have desperately unhip titles like "Enterprise Integration Patterns". The introduction to that book is actually a great essay on when and how to use asynchronous messaging, and it is online here: http://www.enterpriseintegrationpatterns.com/Introduction.ht...
AMQP is very interesting. Check it out if you are building a distributed system or if you have a requirement for reliable asynchronous messaging. When I last looked into it about a year ago, the implementations (all two of them that I found - Apache Qpid and RabbitMQ) were still a little rough, but it looks like a lot has happened since.
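For the curious, publishing a message over AMQP looks roughly like this - a minimal sketch assuming the Python pika client and a RabbitMQ broker on localhost; the queue name and message body are made up:

    import pika

    # Connect to a RabbitMQ broker on the default AMQP port (5672).
    connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
    channel = connection.channel()

    # A durable queue survives a broker restart.
    channel.queue_declare(queue="work", durable=True)

    # delivery_mode=2 marks the message persistent, so the broker
    # writes it to disk rather than keeping it only in memory.
    channel.basic_publish(
        exchange="",
        routing_key="work",
        body=b"resize image 1234",
        properties=pika.BasicProperties(delivery_mode=2),
    )
    connection.close()

A consumer on the other end acknowledges each message only after handling it, which is where the "reliable" part comes from.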
It's weird how few people in the web world are aware of asynchronous messaging.
Can I point out that this explanation did not identify a root cause? How was the corrupt message originally produced?
If they know how it happened, it's not reflected in this article. The solution they describe addresses the detection of and recovery from future mysterious occurrences rather than identifying, understanding and eliminating whatever bug or condition caused this one.
Right, it likely wasn't a bug if it's a single flipped bit and this only happens once in a blue moon. A machine can only transmit perfect 1s and 0s for so long before it gets one wrong.
> we're adding checksums to proactively detect corruption of system state messages
I doubt that means they're literally adding them together; they're adding checksums to the process so that data corruption can be detected and has no effect. Or am I misunderstanding you?
They use MD5 in other areas, but not for that particular message. So now they will. Do people call hashes "checksums" in a colloquial sense? No one actually computes a literal "check sum" any more, do they?
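To illustrate the distinction, a toy sketch in Python with a made-up message: the literal byte sum is the historical "check sum", while a hash digest like MD5 is what people usually mean today.

    import hashlib

    message = b"system state: shard=42 status=ok"

    # Literal "check sum": add the bytes and keep the low 8 bits.
    # Weak - many different corruptions produce the same sum.
    naive_checksum = sum(message) % 256

    # Colloquial "checksum": a hash digest such as MD5, which catches
    # a single flipped bit with overwhelming probability.
    md5_digest = hashlib.md5(message).hexdigest()

    print(naive_checksum, md5_digest)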
These tools are great and I love DabbleDB. But I think the powerful version of these same concepts is a 'Data Repository and Reporting' service. A service that lets you suck in data from many sources (CSV files, web feeds, database queries, emailed reports, SMS, mailing lists, etc.). It would not be scared to hold all this data for me and would keep it secure (and would charge me for usage, so that people with lots of data pay their due). It would then give me a very flexible "Crystal Reports 2.0"-meets-DabbleDB interface to relate and dedupe records, create conflict rules, and schedule data refreshes. It would then let me create beautiful reports that can be embedded in a CMS, emailed, published like a Google Doc, and versioned. If anyone is working on something like this, I have lots of ideas about how it should work that I would be happy to share.
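To make the dedupe/conflict-rule part concrete, here is a rough sketch of the kind of thing such a service would do under the hood; it assumes pandas and uses invented file and column names, purely as illustration:

    import pandas as pd

    # Two overlapping exports of the same customer list (hypothetical files).
    crm = pd.read_csv("crm_export.csv")        # columns: email, name, updated_at
    mailing = pd.read_csv("mailing_list.csv")  # same columns, different freshness

    combined = pd.concat([crm, mailing], ignore_index=True)
    combined["updated_at"] = pd.to_datetime(combined["updated_at"])

    # Conflict rule: when the same email appears more than once,
    # keep the most recently updated record.
    deduped = (
        combined.sort_values("updated_at")
                .drop_duplicates(subset="email", keep="last")
    )

    deduped.to_csv("clean_customers.csv", index=False)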
1) Do you think this should be specific to a particular business process (like managing sales force performance, managing inventory, etc.) or open-ended enough to just consume data in whatever form and let users mold the application to their liking? I guess the short version of this question is: will they know what they want?
2) You mention an admin web interface to the app. Do you think this interface has a chance of being used by non-IT people? Will business users have a conceptual appreciation for data quality issues (the great majority of problems with data analysis) like deduplication, incompleteness, errors, etc.? Will creating clean data repositories (and therefore QA) be the core of this service, or should the user be in charge at every point, even if that means "garbage in" and, consequently, "garbage out"?
3) Would the ability to create private data mashups with data provided by the service provider, other publishers, or public sources be of core importance, or just nice to have?
I have a lot of other questions, since I've started working on a web solution to this problem that works in a way quite similar to what you've described. It would be great if you could share your responses/other thoughts further, either here or privately via my email (in my profile). Thx!
This guy expresses the problem well: http://inamidst.com/whits/2008/ditching