Philosophy on Data Logging Frequency

Greetings!

New to Blynk, but so far the product is awesome! It has been a learning curve at times, for sure.

My question is regarding data logging frequency. I know there are limits on the number of writes per second, but what is best practice in Blynk for optimizing how often data is written?

For example, I have a sketch that monitors water flow. My initial idea was to update the flow rate every second or two within my Arduino sketch, but there can be extended periods (a few hours) with no flow at all. It seems wasteful to keep sending updates when the value is not changing. My next thought was to update only when the current flow differs from the last flow. That is easy enough to do in logic, but what will it do to the History Graph or SuperChart? Is it as simple as enabling the Connect Missing Data Points setting?
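For what it's worth, the "only send on change" gate can be as small as a helper like the one below. This is just a sketch of the idea (the function name and tolerance value are my own invention, not anything from Blynk): in a real Arduino sketch its result would decide whether to call `Blynk.virtualWrite()`. A small tolerance also stops sensor jitter from triggering constant writes.

```cpp
#include <cmath>

// Decide whether a new flow reading is worth pushing to the server.
// The tolerance avoids re-sending on tiny sensor jitter (value is
// illustrative; tune it to your flow meter's noise floor).
bool flowChanged(float current, float last, float tolerance = 0.05f) {
    return std::fabs(current - last) > tolerance;
}
```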

Thoughts from the creators of this awesome app?

If you don’t write any data for a while you may see “no data” or “not enough data to display” in the 1 hour or 6 hour views.
Also, if you only write zeros into the history every now and again then the peaks of data won’t be connected together (not necessarily a problem).

I find it’s best to write data, even if it’s a zero, at whatever frequency you need for the granularity of your flow data to make sense. You may feel that you can get away with accumulating water flow over a one minute period and therefore writing data to Blynk once per minute, or you may need better granularity. Remember that the 1 hour view is the highest resolution (currently) available, so 1 second writes would give you 3,600 data points, which seems far too many to me.
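Pete's approach of accumulating over an interval and writing one value (even a zero) could be sketched roughly like this. The struct and method names are my own, purely illustrative: in a real sketch `addSample()` would run from the flow sensor's interrupt or a fast timer, and `flush()` would feed `Blynk.virtualWrite()` from a `BlynkTimer` firing once per minute.

```cpp
// Accumulate frequent flow samples and report one averaged value
// per interval, writing zeros too so the chart never has gaps.
struct FlowAccumulator {
    float sum = 0.0f;
    int count = 0;

    void addSample(float flow) { sum += flow; ++count; }

    // Called once per reporting interval (e.g. every 60 s from a timer);
    // returns the value to send, then resets for the next interval.
    float flush() {
        float avg = (count > 0) ? (sum / count) : 0.0f;
        sum = 0.0f;
        count = 0;
        return avg;  // returns 0 when no samples arrived (no flow)
    }
};
```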

Pete.

Pete,

Thanks for the reply. Based on what you have mentioned, I think I will update the flow every second or two while flow is occurring, and then drop to a much lower update rate while the flow is zero. I probably won’t go as low as once a minute, as I will want the granularity in case the highest resolution changes in the future, or I export the data for further evaluation in a spreadsheet.
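The two-speed plan Corey describes boils down to choosing the next timer interval from the current flow. A minimal sketch of that decision (interval values are assumptions, not recommendations from the thread):

```cpp
// Pick the next reporting interval based on whether flow is active:
// fast updates while water is moving, a slow heartbeat while idle.
unsigned long nextIntervalMs(float flow,
                             unsigned long activeMs = 2000,    // ~2 s while flowing
                             unsigned long idleMs   = 30000) { // 30 s while idle
    return (flow > 0.0f) ? activeMs : idleMs;
}
```

In an Arduino sketch this would typically drive a `BlynkTimer`/`millis()` comparison rather than a blocking delay.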

regards,
Corey

I wonder if there needs to be a feature where you can send data in chunks of batched updates? It would solve the problem of sending lots of frequent updates, and it would also help with connectivity issues: you could send updates less often, and if the connection goes down you could send what you have buffered when it comes back.

I guess the other option would be some sort of setting where you can tell the system that any missing points should be linearly interpolated between the two nearest readings.
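For reference, the interpolation Michael describes is just standard linear interpolation between the two surrounding samples. A minimal sketch (this is the general formula, not anything Blynk actually implements):

```cpp
// Estimate a missing reading at time t by linear interpolation
// between the two surrounding samples (t0, v0) and (t1, v1).
float interpolate(float t, float t0, float v0, float t1, float v1) {
    if (t1 == t0) return v0;  // degenerate span: no slope to compute
    return v0 + (v1 - v0) * (t - t0) / (t1 - t0);
}
```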

I would think the first option would be better for data accuracy and recovery. Just thinking out loud here.

Michael

Michael,

I agree that more options for sending data to the server would be nice. I also agree it would be useful to have a method to send data that was stored locally during a connectivity outage. It would be nice to log Blynk pin data directly to an onboard SD card so that all Blynk data is recorded regardless of connectivity, and then have the device replay the SD card data to the Blynk server at a set rate (to avoid flooding) once reconnected. Any missing data would then be filled in on the server… I am sure there is all kinds of functionality that could potentially be built on that.
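The store-and-replay idea could look something like the buffer below. This is only a sketch of the queueing logic, with all names invented for illustration; a real sketch would persist entries to an SD card file rather than RAM, and each drained entry would go out as a (throttled) write to the server once the connection returns.

```cpp
#include <deque>
#include <utility>

// Buffer timestamped readings while offline, then drain them a few
// at a time once the connection returns, to avoid flooding the server.
struct OfflineBuffer {
    std::deque<std::pair<long, float>> pending;  // (timestamp, value)

    void record(long ts, float value) { pending.push_back({ts, value}); }

    // Drain up to maxPerFlush entries per call; returns how many were
    // "sent". Calling this once per timer tick spreads the backlog out.
    int flush(int maxPerFlush) {
        int sent = 0;
        while (!pending.empty() && sent < maxPerFlush) {
            pending.pop_front();  // here a real sketch would transmit it
            ++sent;
        }
        return sent;
    }
};
```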

I wonder if there has been any thought about this type of functionality, or if it is simply too hard to integrate into the Blynk system?

regards,
Corey


@corvek welcome to the Blynk community! We have a doc with a good explanation of how the History Graph works - http://docs.blynk.cc/#widgets-displays-history-graph. Briefly:

We store only 1 value per minute on the public server. If you send more values within a one-minute interval, you’ll get the average value for that minute.


Dmitriy,

Thanks for the reply. I did notice on an export of the graph that I only see the one value per minute… so if one wants more granularity than that, one must go to a local server? Would one then need to enable raw value storage in order to export more than a one-minute average? Even then, would exporting the CSV file include all the raw data, or would it still be the one-minute average? I’m just looking for the best method of obtaining the stored values… one that is easy to implement out of the box.

regards,
Corey

Correct. For raw data you need the local server. There is also SuperChart, which stores raw data, but it stores only ~100 points on the cloud. You can increase this limit on a local server, of course.

Dmitriy,

Awesome. I have tried a local server: it sends me my AUTH token, I can connect via my iOS app, and data updates. The export CSV, however, does not send an email out. Strange that the AUTH token email can be sent but not the CSV export. This is another topic, I assume? Should I begin a new topic thread?

regards,
Corey

Could you please create the separate topic with details? Also please check blynk.log.

Dmitriy,

Will create a separate topic for sure, and I’ll make sure to reference the logs. I want to confirm that it is not an install issue, so I am currently rebuilding my small server.

regards,
Corey

@Dmitriy
hi - older thread but similar question; I do a Blynk.virtualWrite() to a vPin every 5 seconds. I see the data come up on SuperChart; however, when I go to “Export to CSV” I don’t seem to have all the data, and more importantly the granularity is lost, i.e. the CSV has one-minute steps.
I’m using local server setup.

Question 1: can I get more granular data from the SuperChart CSV export, i.e. matching what was sent via Blynk.virtualWrite()?
Question 2: how do I ensure I have the full data logged over the start-to-end time period?

No.

You can enable raw data storage and export data from there - https://github.com/blynkkk/blynk-server#enabling-raw-data-storage
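As I read the linked README, enabling raw storage on a local server is a one-line change to `server.properties` (double-check the property name against the README for your server version):

```properties
# server.properties on a local Blynk server -
# store every incoming write instead of one-minute averages
enable.raw.data.storage=true
```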

ok - thank you for quick reply :slight_smile: