I have been working on a Python 3 implementation for the data logger. The logger sequentially calls URLs to trigger data collection and cleanup. I use Chrome to call the data download URL because it opens a tab, saves the data, and then closes the tab automatically. I'm not sure whether saving straight to the download folder is the default for every Chrome install, but it is for mine. You should make sure "Ask where to save each file before downloading" is unchecked, as seen in this picture.
Here is the code to create sequential data files (Python 3):
#########################################
import urllib.request
import time
import webbrowser
IPAddress = '10.0.0.236:8080' #IP address and port. This is different for each person and is specified by the phyphox app
num_data = 5 #Take 5 data chunks
pause_tm = 2 #The amount of time to wait in between data collections
save_dat = 'http://' + IPAddress + '/export?format=0' #Saving data
clear_dat = 'http://' + IPAddress + '/control?cmd=clear' #Clearing a data collection
start_dat = 'http://' + IPAddress + '/control?cmd=start' #Starting a data collection
# Here is where the program actually starts, everything beforehand was just prep
urllib.request.urlopen(start_dat) #Start collecting data!!
for v in range(num_data):
    webbrowser.get("C:/Program Files (x86)/Google/Chrome/Application/chrome.exe %s").open(save_dat) #Open a Chrome window (note: if you're not on Windows you need to change the location of Chrome) and save data!
    time.sleep(pause_tm) #Wait a bit before collecting data again
urllib.request.urlopen(clear_dat) #Clear the data collection
urllib.request.urlopen(start_dat) #Restart the data collection
#Collect data again, for fun, why not
for v in range(num_data):
    webbrowser.get("C:/Program Files (x86)/Google/Chrome/Application/chrome.exe %s").open(save_dat)
    time.sleep(pause_tm)
urllib.request.urlopen(clear_dat) #Clear the data collection
###############################
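The Chrome path in the code above is Windows-specific. Here is a sketch of picking the path by platform; the install locations below are assumptions about typical defaults, so check your own system:

```python
import platform
import webbrowser

# Assumed default install locations -- verify these against your own machine.
CHROME_PATHS = {
    "Windows": "C:/Program Files (x86)/Google/Chrome/Application/chrome.exe",
    "Darwin": "/Applications/Google Chrome.app/Contents/MacOS/Google Chrome",
    "Linux": "/usr/bin/google-chrome",
}

def get_chrome(paths=CHROME_PATHS):
    """Return a webbrowser controller for Chrome on the current platform,
    falling back to the system default browser if the platform is unknown."""
    path = paths.get(platform.system())
    if path is None:
        return webbrowser.get()  # whatever the OS considers the default
    return webbrowser.get(path + " %s")  # '%s' tells webbrowser it's a command line
```

Then `get_chrome().open(save_dat)` replaces the hard-coded call in the loop.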
You need to set your 'IP address:Port' in the IPAddress variable and make sure the phyphox app has remote access enabled. The experiment should be paused before you run the Python code. When you run it, it will trigger a recording to start, then generate data logs every few seconds. Afterwards it will clear the current data session and create a few more data logs. This just illustrates the basic mechanism of logging data, reducing the file size, and logging data again.
Next I need to have a parallel thread that pulls the data periodically and creates a continuous data segment within python.
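As a starting point, here is a minimal sketch of that background thread. It assumes a phyphox endpoint that returns JSON when polled; the exact URL and buffer names are placeholders that depend on your experiment:

```python
import json
import threading
import urllib.request

def poll_loop(get_url, out, interval, stop_event):
    """Periodically fetch data from get_url and append the parsed JSON to
    the shared list `out`, until stop_event is set."""
    while not stop_event.is_set():
        try:
            with urllib.request.urlopen(get_url, timeout=5) as resp:
                out.append(json.loads(resp.read().decode()))
        except OSError:
            pass  # phone unreachable or bad response; try again next cycle
        stop_event.wait(interval)  # sleeps, but wakes early if stopped

# Usage sketch (URL and buffer name are hypothetical):
# samples = []
# stop = threading.Event()
# t = threading.Thread(target=poll_loop,
#                      args=('http://10.0.0.236:8080/get?accX', samples, 2, stop),
#                      daemon=True)
# t.start()
# ... main program keeps logging ...
# stop.set(); t.join()
```

Using `stop_event.wait(interval)` instead of `time.sleep(interval)` lets the thread shut down promptly when the main program asks it to.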
I ran the above code and generated a few files. I opened two files that were generated back to back and lined up the data. If you see the picture, you will note that the data does appear to be directly appended in subsequent files, which should greatly reduce the load for calculation. Obviously I will load in, say, the last 5 values and do an error check just to make sure I didn't drop a packet, but I can probably load segments of files instead of all of the files.
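That error check could look something like this sketch: match the last few samples of the previous log against the next one and report where the genuinely new data begins (the 5-value window is just the number mentioned above):

```python
def files_overlap(prev_values, next_values, n=5):
    """Check that the last n samples of the previous log reappear in the
    next one, so the logs can be stitched together without gaps.

    Returns the index in next_values where fresh data begins, or None if
    the overlap is missing (e.g. a dropped packet)."""
    tail = prev_values[-n:]
    for start in range(len(next_values) - len(tail) + 1):
        if next_values[start:start + len(tail)] == tail:
            return start + len(tail)  # first sample not already logged
    return None
```

If this returns None, the safe fallback is to reload the whole file rather than just its tail segment.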
Then... The signal processing can begin!!