Stream computing is a new paradigm driven by new data-generating scenarios: the ubiquity of mobile devices, location services, and pervasive sensors. These scenarios have created a pressing need for scalable computing platforms and parallel architectures that can process the vast volumes of streaming data they generate.
In static data computation (the left-hand side of the attached diagram), questions are asked of data at rest. In streaming data computation (the right-hand side), continuously arriving data is evaluated against standing questions.
Let me give a simple example. In a financial trading platform, applications have traditionally been written to analyse historical records in batch mode: the data is preserved in a data warehouse, and when a user submits a request or query, the result is computed and returned to the consumer. That is the first use case.
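The batch model above can be sketched in a few lines of Python. This is an illustrative toy, not a real trading system: the `Trade` record, the `warehouse` list standing in for a data warehouse, and the `average_price` query are all hypothetical names.

```python
from dataclasses import dataclass

@dataclass
class Trade:
    symbol: str
    price: float

# The "data warehouse": records accumulate before any question is asked.
warehouse: list[Trade] = [
    Trade("INFY", 18.50),
    Trade("TCS", 42.10),
    Trade("INFY", 19.10),
]

def average_price(symbol: str) -> float:
    """Answer a consumer's query over the stored historical records."""
    prices = [t.price for t in warehouse if t.symbol == symbol]
    return sum(prices) / len(prices)

print(average_price("INFY"))  # computed only when the consumer asks
```

Note the order of events: data first, question later. The computation runs only when a request arrives.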
With big data streaming technology, the queries (for example, the market trend of IT stocks) are pre-built. As the data arrives on the stream, the results are published to the prescribed subscribers/consumers. It is an exciting technology to get a taste of.
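The streaming model inverts the order: the question is registered first, and each arriving event is evaluated against it immediately. The sketch below assumes a tiny in-process publish/subscribe mechanism; the names (`StandingQuery`, `subscribe`, `on_trade`) are illustrative and not taken from any particular streaming framework.

```python
from typing import Callable

class StandingQuery:
    """A pre-built question that arriving data flows through."""
    def __init__(self, predicate: Callable, subscriber: Callable):
        self.predicate = predicate    # which events the consumer cares about
        self.subscriber = subscriber  # where matching results are published

queries: list[StandingQuery] = []

def subscribe(predicate: Callable, subscriber: Callable) -> None:
    """Register a standing query before any data arrives."""
    queries.append(StandingQuery(predicate, subscriber))

def on_trade(symbol: str, price: float) -> None:
    """Called per arriving event: data is evaluated by the static queries."""
    for q in queries:
        if q.predicate(symbol, price):
            q.subscriber(symbol, price)

results = []
# Standing query: publish whenever an IT stock trades above 40.
subscribe(lambda s, p: s in {"INFY", "TCS"} and p > 40,
          lambda s, p: results.append((s, p)))

# Events arrive one at a time; matches are pushed out immediately.
for sym, price in [("INFY", 18.5), ("TCS", 42.1), ("INFY", 41.0)]:
    on_trade(sym, price)

print(results)  # results were published as matching events arrived
```

The design choice to highlight: in the batch sketch the query pulled from stored data on demand, whereas here results are pushed to the subscriber the moment a matching event streams in, with no warehouse in between.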