A common use case for DeOS is storing large amounts of log data, either for analysis using MapReduce or as primary storage paired with a secondary analytics cluster that performs more advanced analytics tasks. To store log data, you can use a DeOS bucket named, for example, logs, and use a unique value, such as a date, for the key. Each log file would then be the value associated with its unique key.
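The bucket/key/value layout described above can be sketched as follows. This is a minimal illustration using an in-memory dict in place of DeOS; the `put`/`get` helpers are hypothetical stand-ins, not the actual DeOS client API.

```python
# Sketch of a date-keyed "logs" bucket. The store and helpers below are
# illustrative stand-ins for a real DeOS client (API assumed, not actual).
from datetime import date

store = {}  # {(bucket, key): value}

def put(bucket, key, value):
    """Write a value under (bucket, key) — stands in for a DeOS write."""
    store[(bucket, key)] = value

def get(bucket, key):
    """Read a value by (bucket, key) — stands in for a DeOS read."""
    return store.get((bucket, key))

# One log object per day in a "logs" bucket, keyed by ISO date.
today = date(2024, 1, 15).isoformat()  # "2024-01-15"
put("logs", today, "127.0.0.1 GET /index.html 200\n10.0.0.5 GET /about 404\n")

# Fetching that day's logs is a single key lookup.
print(get("logs", "2024-01-15"))
```

Keying by date keeps each day's logs retrievable with a single lookup, at the cost of one growing object per day; finer-grained keys (e.g. date plus hour) follow the same pattern.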
For storing log data from different systems, you could use a unique bucket for each system (e.g. system1_log_data, system2_log_data, etc.) and write each system's logs to its corresponding bucket. To analyze that data, you could use DeOS's MapReduce system for aggregation tasks, such as summing the record counts for a given date, or use Search for more robust, text-based queries.
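The aggregation task just mentioned (summing record counts per date across per-system buckets) can be sketched as a map and a reduce phase. The bucket names and in-memory data below are illustrative; a real deployment would submit this as a DeOS MapReduce job rather than running it in local Python.

```python
# Local simulation of the MapReduce aggregation described above:
# sum the number of log records per date across per-system buckets.
from collections import defaultdict

# Illustrative data — one bucket per system, one object per date.
buckets = {
    "system1_log_data": {
        "2024-01-15": "line1\nline2\nline3",
        "2024-01-16": "line1",
    },
    "system2_log_data": {
        "2024-01-15": "line1\nline2",
    },
}

# Map phase: for each stored log object, emit (date, record_count).
mapped = [
    (key, len(value.splitlines()))
    for bucket in buckets.values()
    for key, value in bucket.items()
]

# Reduce phase: sum the emitted counts for each date.
totals = defaultdict(int)
for day, count in mapped:
    totals[day] += count

print(dict(totals))  # {'2024-01-15': 5, '2024-01-16': 1}
```

Because the keys already encode the date, the map phase needs no parsing beyond counting lines, which is what makes a date-based key scheme convenient for this kind of aggregation.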
For a large volume of log data that is written to DeOS frequently, some users choose to keep primary storage of logs in DeOS and replicate that data to a secondary cluster for heavy analytics jobs, whether that cluster is another DeOS cluster or a system such as Hadoop.
Because the access patterns of reading and writing individual records in DeOS are very different from those of a MapReduce job, which iterates over many keys, separating the write workload from the analytics workload will give you higher performance and more predictable latency.