Conversation

@larsoner
Member

@larsoner larsoner commented Mar 2, 2016

This PR:

  1. Adds an npad='auto' mode that chooses the shortest pad >= 100 samples such that the padded signal length is a power of 2.
  2. Deprecates the npad=100 default so that 'auto' becomes the default in 0.12.
  3. Uses rfft and irfft for the computations.

This means:

  1. If there is an integer ratio of resampling rates (e.g., 1200 down to 100) then it should be much faster.
  2. Memory usage should be reduced by a factor of 2 during computations.

Ready for review/merge from my end.

Closes #2035.
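For illustration, the 'auto' rule described in point 1 can be sketched as follows. This is a hypothetical helper, not the actual implementation (in particular, how the pad is split across the two ends of the signal is glossed over here):

```python
import numpy as np

def auto_npad(n_samples, min_pad=100):
    """Sketch of npad='auto': the shortest pad >= min_pad samples such
    that the padded length (n_samples + pad) is a power of 2."""
    target = 2 ** int(np.ceil(np.log2(n_samples + min_pad)))
    return target - n_samples

# e.g. a 1000-sample signal gets padded up to 2048 samples
pad = auto_npad(1000)
```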

@larsoner larsoner changed the title from "ENH: Faster raw resampling" to "MRG: Faster raw resampling" Mar 2, 2016
@larsoner
Member Author

larsoner commented Mar 2, 2016

@choldgraf this one should make you happy :)

@larsoner larsoner closed this Mar 2, 2016
@larsoner larsoner reopened this Mar 2, 2016
@jona-sassenhagen
Contributor

Super cool. IIRC some of the TF decomposition methods could also benefit from auto padding to powers of 2?

  sfreq : float
      New sample rate to use.
- npad : int
+ npad : int | str
Member

or None?

Member

my bad got it... forget it

Member Author

no, None is only for deprecation purposes (not meant to be used)

@agramfort
Member

LGTM

@choldgraf if you can give it a try to confirm it works for you, that would be great

@larsoner
Member Author

larsoner commented Mar 2, 2016

@agramfort why wait when we can confirm:

# -*- coding: utf-8 -*-
"""
Created on Wed Mar  2 11:54:07 2016

@author: larsoner
"""
from __future__ import print_function

import time
import numpy as np
from mne.io import RawArray
from mne import create_info

info = create_info(1, 1000.)

for N in np.logspace(1, 5, 10):
    data = np.empty((1, int(N)))
    raw = RawArray(data, info)
    print(('%d: ' % N).ljust(8), end='')
    times = []
    for npad in (100, 'auto'):
        t0 = time.time()
        raw.resample(500, npad=npad)
        raw.resample(100, npad=npad)
        times.append(time.time() - t0)
        print('%0.3f' % times[-1], end=' ')
    print(': %6.1fx' % (times[0] / times[1]))

Yields:

10:     0.539 0.003 :  207.4x
27:     0.016 0.005 :    3.4x
77:     0.002 0.002 :    1.1x
215:    0.002 0.002 :    1.1x
599:    0.004 0.002 :    1.8x
1668:   0.006 0.004 :    1.6x
4641:   0.008 0.007 :    1.2x
12915:  10.082 0.268 :   37.6x
35938:  2.309 0.043 :   53.9x
100000: 0.141 0.094 :    1.5x

So the gains depend on the input and padded lengths, but the speedup can sometimes be drastic.
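A side note on point 3 of the description: for real input of length n, rfft stores only the n // 2 + 1 non-redundant complex coefficients (conjugate symmetry gives the rest), which is where the roughly 2x memory reduction during computation comes from, and irfft inverts it exactly. A quick NumPy check:

```python
import numpy as np

n = 4096  # a power-of-2 length, the case npad='auto' targets
x = np.random.RandomState(0).randn(n)

full = np.fft.fft(x)    # n complex coefficients
half = np.fft.rfft(x)   # only n // 2 + 1: the rest are conjugate-symmetric

assert half.shape[0] == n // 2 + 1
assert np.allclose(full[:n // 2 + 1], half)

# irfft inverts rfft exactly, so resampling can work on the half spectrum
x_back = np.fft.irfft(half, n)
assert np.allclose(x, x_back)
```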

@choldgraf
Contributor

:D resampling speed!

I ran it on a dataset that I've got, here's the output:

import time
import numpy as np
import pandas as pd

# `brain` is a Raw instance loaded from the dataset
times_old, times_new = [], []
for i in np.arange(1, 100, 2):
    t0 = time.time()
    brain.crop(0, i, copy=True).resample(100, 'auto')
    times_new.append(time.time() - t0)

    # Old behavior (npad=100)
    t0 = time.time()
    brain.crop(0, i, copy=True).resample(100, 100)
    times_old.append(time.time() - t0)

times_comp = pd.DataFrame([times_new, times_old], index=['new', 'old'])
ax = times_comp.T.plot()
ax.set_ylabel('Time to compute (s)')
ax.set_xlabel('Length of signal (s)')

[image: plot of computation time (s) vs. signal length (s), new vs. old resampling]

Nice :)


@kingjr
Member

kingjr commented Mar 2, 2016

nice :)

@teonbrooks
Member

😍

@larsoner
Member Author

larsoner commented Mar 2, 2016

I'll go ahead and merge since I need this for #2977, but if anyone else has comments on the code, feel free to leave them and I'll follow up

larsoner added a commit that referenced this pull request Mar 2, 2016
@larsoner larsoner merged commit 0e4332e into mne-tools:master Mar 2, 2016
@larsoner larsoner deleted the faster-resamp branch March 2, 2016 18:35
@jona-sassenhagen
Contributor

This one is so good to have happened.
