phyphox Forums

Full Version: Chronic data logger
I'm working on a server to handle several phyphox users with unknown experiment outputs. This kind of server would be very useful for many teaching applications where you want to get live streams from a bunch of users very fast. This is the server component that generates files for several devices simultaneously. Right now it's single-threaded and simply iterates over a list of pre-specified IP addresses.

I am working on detecting the possible IP addresses so the hook-up is instantaneous and downloads are as fast as possible. I am also timestamping the data using each cell phone's clock. If all the cell phones' clocks are synchronized, I should be able to get a pretty good alignment.
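Device discovery could start as a simple port scan over the local subnet. This is only a sketch: the function name `find_phyphox_devices` is mine, and it assumes every phone is serving phyphox remote access on the same port (8080 here).

```python
import socket

def find_phyphox_devices(subnet='10.0.0.', port=8080, timeout=0.2):
    """Return 'ip:port' strings for hosts on a /24 subnet that accept
    a TCP connection on the phyphox remote-access port."""
    found = []
    for host in range(1, 255):
        addr = subnet + str(host)
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            if s.connect_ex((addr, port)) == 0:  # 0 means the connection was accepted
                found.append(addr + ':' + str(port))
    return found
```

The resulting list could replace the hard-coded address tuple in the code below. Note that an accepted connection only proves something is listening on that port; a follow-up HTTP request to the device would be a safer confirmation that it is actually phyphox.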

Next I am working on code to poll the folder containing the data and maintain a real-time buffer for several devices within the same workspace. This will be useful for all kinds of fun experiments.
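A first cut at that folder poller might simply re-read whatever files the logging loop below has written. `load_buffer` is a hypothetical name, and I'm assuming here that the export parses as comma-separated text despite the `.xls` extension; swap in a different reader if your export format differs.

```python
import glob
import os

import pandas as pd

def load_buffer(save_dir='C:/Chronic_IOT/'):
    """Read every exported file in save_dir into a dict of DataFrames,
    keyed by filename (which encodes the timestamp and device IP)."""
    buffer = {}
    for path in glob.glob(os.path.join(save_dir, 'TS_*_data.xls')):
        # Assumption: the export is plain delimited text; switch to
        # pd.read_excel if it turns out to be a real workbook.
        buffer[os.path.basename(path)] = pd.read_csv(path)
    return buffer
```

Calling this on a timer (or from a file-system watcher) would give each analysis script a fresh snapshot of every device's data without touching the network code.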

My goal is to make something that is very distributable and easy to get running on any PC. I think the process of logging data, transferring it off the device, or accessing it through Python may be too much overhead for some people to take on. Those people might otherwise have great contributions to make in this space.


#Python code starts here!!!!

import datetime
import time
import urllib.request

import numpy as np
import requests



IPAddress = ('10.0.0.236:8080', '10.0.0.238:8080')  # IP address and port; different for each device, shown by the phyphox app
save_dir = 'C:/Chronic_IOT/'  # Where the data will be saved

data_dur = 10  # Seconds to let each collection run before exporting

session_log = []
print('New Session Starting')

session_log = np.append(session_log, datetime.datetime.now())
while True:
    
    for indy, IPAddress_c in enumerate(IPAddress):  # Go through each IP address
        start_dat = 'http://' + IPAddress_c + '/control?cmd=start'  # Start a data collection

        try:
            urllib.request.urlopen(start_dat)  # Tell the device to start collecting
            print('Polling device ' + str(indy) + ' for data')
        except OSError:
            print('Device ' + str(indy) + ' failed to connect')
            
    time.sleep(data_dur)  # Let the devices collect for data_dur seconds
    
    for indy, IPAddress_c in enumerate(IPAddress):  # Go through each IP address
        clear_dat = 'http://' + IPAddress_c + '/control?cmd=clear'  # Clear the collected data
        start_dat = 'http://' + IPAddress_c + '/control?cmd=start'  # Restart data collection
        save_dat = 'http://' + IPAddress_c + '/export?format=0'  # Export the collected data

        try:
            r = requests.get(save_dat)  # Download the exported data from the phone
            print('Captured data from device: ' + str(indy))
            urllib.request.urlopen(clear_dat)  # Clear the device buffer to limit file size
            urllib.request.urlopen(start_dat)  # Restart collection for the next cycle

            # Saving data
            tmp_val = r.headers['Content-Disposition']  # Header carries the export filename
            nm_Ts = tmp_val[22:len(tmp_val) - 4]  # Extract the experiment name and timestamp (fixed offsets into the header)

            with open(save_dir + 'TS_' + nm_Ts + '_IP_' + IPAddress_c.replace(':', 'pp') + '_data.xls', 'wb') as f:
                f.write(r.content)  # Write the exported data to a file

        except (OSError, requests.RequestException, KeyError):
            print('Device ' + str(indy) + ' failed to connect')
    
    
 #Python code ends here!!!!
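Since the loop above contacts devices strictly one at a time, one slow or unreachable phone stalls everyone behind it. A thread pool could issue the control requests concurrently; this is only a sketch, and `poll_device`/`poll_all` are hypothetical helper names, not part of phyphox.

```python
import concurrent.futures
import urllib.request

def poll_device(ip, cmd='start', timeout=5):
    """Send one control command to a device; return (ip, success)."""
    try:
        urllib.request.urlopen('http://' + ip + '/control?cmd=' + cmd,
                               timeout=timeout)
        return ip, True
    except OSError:  # covers URLError, timeouts, refused connections
        return ip, False

def poll_all(addresses, cmd='start'):
    """Issue the same command to every device in parallel, so one slow
    device does not delay the others. Returns {ip: success}."""
    with concurrent.futures.ThreadPoolExecutor(max_workers=len(addresses)) as pool:
        return dict(pool.map(lambda ip: poll_device(ip, cmd), addresses))
```

The export/save step could be parallelized the same way, though concurrent writes would then need distinct filenames per device (which the IP-based naming above already provides).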
(11-13-2018, 06:27 AM)jbshute Wrote: [ -> ]Im working on a server to handle several phyphox users with unknown experiment outputs. [...]

Thanks, that's really helpful. Are there any plans for multithreading?