Conversation
I'm hoping that async commands will work well for most of the instrument drivers, but that will take some experimentation. Regardless, this set of wrappers will let people define whichever version they can for any given instrument, and sweeps can run either sync or async, making the best of what they find.
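A minimal sketch of what such a wrapper could look like (all names here are invented for illustration, not the actual QCoDeS API): a helper that runs a command correctly whether the driver defined it sync or async, so a sweep can make the best of whatever it finds.

```python
import asyncio
import inspect

async def _as_coro(awaitable):
    # adapt any awaitable into a coroutine that asyncio.run() accepts
    return await awaitable

def call_either(command, *args, **kwargs):
    """Run a command regardless of whether it was defined sync or
    async (illustrative helper, not the actual QCoDeS wrapper)."""
    result = command(*args, **kwargs)
    if inspect.isawaitable(result):
        return asyncio.run(_as_coro(result))
    return result

# sync and async versions of a mock instrument command
def get_voltage_sync():
    return 1.25

async def get_voltage_async():
    await asyncio.sleep(0)  # stand-in for non-blocking I/O
    return 1.25
```

With this shape, a synchronous sweep can call `call_either` on every parameter, while an async sweep could instead `await` the coroutine versions directly.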
Also, I started with the snazzy new python 3.5 syntax but ended up downgrading to 3.3 because anaconda (and my linter) are still going to take a while to play nicely with the new syntax.
Hey Alex, congrats on putting this together! I get this when running the example notebook on 3.4.3:

```
...Qcodes/qcodes/utils/validators.py in Numbers()
     84     '''
     85
---> 86     def __init__(self, min_value=-math.inf, max_value=math.inf):
     87         if isinstance(min_value, (float, int)):
     88             self._min_value = min_value

AttributeError: 'module' object has no attribute 'inf'
```

EDIT: nvm, @guenp already got this.
Change all the `math.*` things to `float('*')` (e.g. `math.nan` -> `float('nan')`); `math.inf` and `math.nan` were only added in Python 3.5.
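A sketch of the fixed validator, using `float('inf')` defaults so it runs on Python versions before 3.5 (simplified relative to the real class; the `is_valid` method here is illustrative):

```python
class Numbers:
    """Validator for numeric values. float('inf') works on any
    Python 3 version, whereas math.inf requires >= 3.5."""
    def __init__(self, min_value=-float('inf'), max_value=float('inf')):
        if not isinstance(min_value, (float, int)):
            raise TypeError('min_value must be a number')
        if not isinstance(max_value, (float, int)):
            raise TypeError('max_value must be a number')
        self._min_value = min_value
        self._max_value = max_value

    def is_valid(self, value):
        # NaN fails both comparisons, so it is rejected automatically
        return (isinstance(value, (float, int))
                and self._min_value <= value <= self._max_value)
```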
OK, everything runs now.
(not sure if this is the right place for this comment, but it applies to the Qcodes example.ipynb file)
when I run the following cell in the notebook:

```python
swp.sync_live()
plt.plot(swp['chan0'], swp['amplitude'])
```

Somehow, when re-running the cell, it requires me to rerun `%matplotlib nbagg`, otherwise the figure will not display and just returns

```
[<matplotlib.lines.Line2D at 0x10af29c50>]
```
Using: Chrome, OS X Yosemite
edit: this is not the case for the second example (pcolormesh)
I believe it's a known nbagg deficiency. You can also "fix" it by running either `plt.close()` or `plt.show()`, I don't remember which.
@alexcjohnson, can you perhaps add a brief document giving an overview of the object hierarchy? Everything looks relatively straightforward, but I don't want to be making wrong assumptions. Also I think that would be a good location for the global design discussion.
Loving the code so far, great job @alexcjohnson! 🙌 I'll have more comments as I'll go through & test it later this week but here's some first thoughts:
Cheers,
@guenp lots of pieces to respond to, thanks!
Interesting - do you have a particular use case in mind? Or would this be taken care of by the monitor framework (that I haven't written, but discussed above). In that concept this data isn't connected to the sweep data by any means aside from the timestamp, but are there cases that you'd like it to be an integral part of the sweep?
👍 - the parameters do have limits (
This would be pretty easy for a user to set up on their own, not sure how much I could do to facilitate that but I'll take a look.
That should work for setting/getting individual values, but it makes it a little tricky to pass parameters into a sweep. Perhaps though there is a way to keep the syntax distinct so we can have it both ways.
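One hypothetical way to keep the two syntaxes distinct (all class and method names below are invented for illustration): the instrument exposes a property for direct attribute-style get/set, while the underlying parameter object stays callable so it can be handed into a sweep.

```python
class Parameter:
    """Callable handle that can be passed into a sweep
    (illustrative sketch, not the actual QCoDeS class)."""
    def __init__(self, name, initial=0.0):
        self.name = name
        self._value = initial

    def get(self):
        return self._value

    def set(self, value):
        self._value = value

    def __call__(self, *args):
        # par() reads, par(x) writes: one object, both directions
        if args:
            self.set(args[0])
        else:
            return self.get()


class MockSource:
    def __init__(self):
        self._gate = Parameter('gate')

    # attribute syntax for interactive get/set ...
    @property
    def gate(self):
        return self._gate.get()

    @gate.setter
    def gate(self, value):
        self._gate.set(value)

    # ... while the Parameter object itself is what a sweep receives
    def parameter(self, name):
        return getattr(self, '_' + name)
```

So `source.gate = 0.5` works interactively, and `source.parameter('gate')` hands the sweep something it can set and read repeatedly.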
Yup - that's high on the list. Now that the sweep itself is running in a separate process, feeding the data to yet another process, this should be as easy as wrapping the plotting calls up in a
I haven't yet found a
I actually haven't played with Azure yet, but I agree that it's the direction to try.

I missed the Azure discussion, what's the story there?

@alexcjohnson thanks for the response! @akhmerov Azure is the Microsoft solution to cloud storage. The idea would be to have a data storage process send datapoints to the cloud directly while taking data (provided the datapoints are also stored locally & there's a working internet connection). Then you could just import data in your ipython notebook/matlab/etc. online for post-analysis.
Maybe I'm missing something, but I'm still fairly convinced that... As @akhmerov mentions, the monitor can do more than just measure values: it can act on those measurements (either feedback control or error response). But during a sweep, the main acquisition task is king. I'm not sure quite what you had in mind regarding mixed blocking/nonblocking, but I was envisioning treating all measurements the monitor wants to do as blocking, at least during sweeps. What I have in mind is roughly:
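As a hypothetical sketch of that "sweep is king" idea (class and method names invented here): the sweep loop stays in charge and invokes the monitor's blocking checks only between points, at a rate the monitor throttles itself.

```python
import time

class Monitor:
    """Background checks that defer to the main acquisition loop
    (illustrative sketch, not a proposed final API)."""
    def __init__(self, checks, min_interval=1.0):
        self.checks = checks          # list of blocking callables
        self.min_interval = min_interval
        self._last_run = 0.0

    def call(self):
        # the sweep calls this between points; the monitor only runs
        # its (blocking) measurements if enough time has passed
        now = time.monotonic()
        if now - self._last_run >= self.min_interval:
            for check in self.checks:
                check()               # blocking, but the sweep chose when
            self._last_run = now

readings = []
monitor = Monitor([lambda: readings.append('fridge_temp')],
                  min_interval=0.0)

for point in range(3):    # stand-in for the sweep loop
    pass                  # ... set, measure, store ...
    monitor.call()        # monitor runs between points
```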
you mean instead of

Possibly! As already mentioned, I'm not sure if
@akhmerov re: Storage format: You're quite right about the two parts, the explicit measurements vs the log info. The latter is primarily the responsibility of the monitor, but we may also want to snapshot it along with each sweep, so that you can look up the full system state during a sweep without having to correlate timestamps and hunt through a log file.

re: the best format for each sweep type - totally agreed, and it also depends on the size of the sweep, the number of things being measured simultaneously, and personal / team preferences. I put together
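As a toy illustration of that snapshot idea (the real QCoDeS snapshot format differs; this is just the shape of the thing), each sweep could store a timestamped copy of every instrument's parameter values:

```python
import time

def snapshot(instruments):
    """Hypothetical per-sweep snapshot: a timestamped copy of each
    instrument's parameter values, saved next to the sweep data so
    the full system state can be looked up without log-hunting."""
    return {
        'timestamp': time.time(),
        'instruments': {name: dict(params)
                        for name, params in instruments.items()},
    }

# example: one mock instrument with two parameters
state = snapshot({'lockin': {'frequency': 137.5, 'amplitude': 0.1}})
```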
re:
@guenp @alexcjohnson re: storage format
Not exactly: as Alex also notes, the full log should also contain the data from the instruments involved in the measurement, so that the full history of the experiment is recorded.
There are cases when the raw data takes up an unreasonably large volume. Imagine taking raw data at a megasample/sec rate when all you need is accurate statistics (averages and errors). You'll have terabytes to store in a matter of days.
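For that case, one standard trick is to reduce on the fly: Welford's online algorithm keeps an accurate mean and variance without ever storing the raw samples, so only the statistics need to hit disk. A minimal sketch:

```python
class RunningStats:
    """Welford's online algorithm: numerically stable mean and
    variance computed one sample at a time (sketch of on-the-fly
    reduction; no raw-sample storage needed)."""
    def __init__(self):
        self.n = 0
        self.mean = 0.0
        self._m2 = 0.0

    def add(self, x):
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n
        self._m2 += delta * (x - self.mean)

    @property
    def variance(self):
        # unbiased sample variance
        return self._m2 / (self.n - 1) if self.n > 1 else 0.0

stats = RunningStats()
for sample in [1.0, 2.0, 3.0, 4.0]:   # stand-in for a raw stream
    stats.add(sample)
```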
Here's how I imagine it: The problem, of course, is that all the complexity is now hidden in the definition of scheduling priorities and the solving of the scheduling problem. Nonetheless, simple workflows are easy to incorporate.
@alexcjohnson re:

Methods can be as fancy as it gets, including validation and asynchronicity. I just think that it makes sense to keep the exposed interface simple and elegant.
I have a workflow suggestion: the code in this PR fulfils its purpose. Shall we merge, and move the remaining discussion into separate issues?
I'm going to assume first that by 'sweep' you mean a 'measurement'.
A measurement should be able to do the same. (e.g. I've done an experiment once where I tuned the temperature while sweeping the magnetic field such that my sample resistivity stayed at a certain value.)
This depends on the priority you assign to a measurement process. I think the
It shouldn't have to be, if instruments are connected via different ports. The PC should be able to handle talking to GPIB and a COM port in parallel; however, all GPIB communication should be blocking, as instruments are daisy-chained together on one port.
Machine learning? Ambitious 👍
I still think the
That's a bit confusing to me. Why not directly talk to the monitor or read the monitor's data log/
This makes it look more and more like a measurement (hence my suggestion to make it a subclass of
As I said earlier, I suggest making this a temp buffer
That sounds redundant to me; can't the monitor just refer to the dataset and log only the parameters that aren't included in the measurement?
Sure, the... What exactly would the 'Full log' look like, in your opinion? Should it include information such as when measurements were started, and which processes are given priority at which time? In that case I think that should be a separate log (i.e. a Scheduler log) :) The monitor is just there so that you can retrace the status of the system at any given time, and so the user can see the instrument values while the system is taking data. Btw, there's no reason why any parameter that's being recorded for a measurement shouldn't also be updated in the monitor at the same time (e.g. if the measurement probes the temperature, the monitor could also save that as a datapoint simultaneously)
Agree! I'm starting to lose track of what we've agreed on or not, and what's still up for discussion. |
Merging (@guenp and I think this PR is merge-ready). Here's a 💃 for @alexcjohnson :-) |
feature: Add wrappers
…ainer Call get_DB_debug and make sure its cast to bool
* [DEM-525] Improve raw waveform upload speed
typofix requesting period of burst mode
Update QDevil branch from master
Feature/device meta fix types

@qdev-dk/qcodes here's a first cut framework. (esp. @alan-geller @akhmerov @guenp @spauka - apologies for the long time since I said "it's almost here!")
The example .ipynb should be self-contained (but you have to change `qcpath` in the first cell to wherever you put the package when you clone it) - it defines a few mock instruments, makes an experiment that connects them to a toy model, and runs a couple of sweeps. The sweeps run (by default) in a separate process, and you can refresh their plots when you want (making this automatic is one of the next steps).

I haven't yet made any real instrument drivers (I still need to play with sync/async visa and IP commands) but they'll look pretty much like the mock instruments in the notebook, with a few available additions like `get_cmd` etc., which can be arbitrary functions in addition to just strings. I also haven't done anything about monitoring, but I'll point out where I expect it to go.

I'm most interested in issues about structure - the kind of objects we're making and how they relate to each other - but any and all comments & questions are welcome, either general or connected to specific lines of the code. (and as before, don't merge it, I'll do that when we're all happy)
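As a rough illustration of that `get_cmd` flexibility (the class below is a toy in the spirit of the notebook's mock instruments, not the actual driver API): a parameter's `get_cmd` can be either a command string sent over the wire or an arbitrary callable.

```python
class MockInstrument:
    """Toy instrument where a parameter's get_cmd is either a
    command string or an arbitrary function (illustrative only)."""
    def __init__(self):
        self._params = {}
        self._state = {'freq': 1000.0}   # fake hardware state

    def write_read(self, cmd):
        # stand-in for VISA/IP communication: 'freq?' -> state['freq']
        return self._state[cmd.rstrip('?')]

    def add_parameter(self, name, get_cmd):
        self._params[name] = get_cmd

    def get(self, name):
        get_cmd = self._params[name]
        if callable(get_cmd):
            return get_cmd()          # arbitrary function
        return self.write_read(get_cmd)  # plain command string

ins = MockInstrument()
ins.add_parameter('frequency', get_cmd='freq?')  # string command
ins.add_parameter('doubled',
                  get_cmd=lambda: 2 * ins.get('frequency'))
```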