
Fast grid-based neighbor search implementation.#1996

Closed
seb-buch wants to merge 35 commits into MDAnalysis:develop from seb-buch:feature-grid

Conversation

@seb-buch
Contributor

Changes made in this Pull Request:

  • Cython implementation of grid-based neighbor search
  • Cython implementation of PBC-aware distance calculator

PR Checklist

  • Tests?
  • Docs?
  • CHANGELOG updated?
  • Issue raised/referenced?

ayushsuhane and others added 26 commits June 25, 2018 20:10
Member

@orbeckst orbeckst left a comment

Lots of good work.

I am not sure that I am 100% qualified to review so take my comments with a grain of salt.

I hope that some other @MDAnalysis/coredevs will have a look, too.

Neighbor search library --- :mod:`MDAnalysis.lib.grid`
======================================================

This neighbor search library is a serial Cython port of the NS grid search implemented in GROMACS.
Member

I want to make sure we properly attribute:

  • link to Gromacs website
  • cite Gromacs paper in which they describe the neighbor search
  • add "and published under the LGPL)"

Member

A bit more detail on what the idea is and why it is needed would be good.

A usage example would be great, especially as this is a versatile tool that people might want to use in their own analysis code.

Member

Add functions and classes for which docs should be generated.

Member

If we cite, then please add a reference to 'Understanding Molecular Simulation' by Frenkel. The algorithm used is called a cell list, and the book is honestly the only reference I have found for it. It's described in the appendix together with Verlet lists.
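For readers who have not seen the algorithm, here is a minimal pure-Python sketch of a cell list (illustrative only — no PBC, names not taken from this PR — checked against a brute-force search):

```python
import numpy as np

def brute_force_pairs(coords, cutoff):
    """O(N^2) reference: test every pair directly."""
    n = len(coords)
    return [(i, j) for i in range(n) for j in range(i + 1, n)
            if np.linalg.norm(coords[i] - coords[j]) <= cutoff]

def cell_list_pairs(coords, box, cutoff):
    """Bin points into cells with edge >= cutoff; any pair within the
    cutoff then lies in the same cell or one of the 26 neighboring
    cells, so only those cells need to be scanned."""
    ncells = np.maximum((box // cutoff).astype(int), 1)
    cellsize = box / ncells
    cells = {}
    for i, xyz in enumerate(coords):
        cells.setdefault(tuple((xyz // cellsize).astype(int)), []).append(i)
    pairs = []
    for (cx, cy, cz), members in cells.items():
        for sx in (-1, 0, 1):
            for sy in (-1, 0, 1):
                for sz in (-1, 0, 1):
                    for i in members:
                        for j in cells.get((cx + sx, cy + sy, cz + sz), []):
                            # i < j keeps each pair exactly once
                            if i < j and np.linalg.norm(coords[i] - coords[j]) <= cutoff:
                                pairs.append((i, j))
    return sorted(pairs)
```

For uniform densities this turns the quadratic search into a linear one, which is the speed-up this PR implements in Cython.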

Contributor Author

Thanks for your comments.
The documentation is clearly lacking and needs to be written, as does the reference to the original code/algorithm. I must admit that I wanted to push the code so @ayushsuhane can compare it with his solution.
From what I have tested, grid NS is faster when you perform NS on all atoms, but the tree-based approach may still be faster for small calculations (e.g. when you only need the neighbors of a few residues).

Member

Getting it out for @ayushsuhane is great, and I very much agree with you. (I wouldn't be doing my job if I weren't pointing out anything that should, in my opinion, be addressed at some point down the line.)


This neighbor search library is a serial Cython port of the NS grid search implemented in GROMACS.
"""

Member

Can you add a comment to the code as to which Gromacs source code files inspired this code? Which version of Gromacs? Add a note on the license of the file (probably LGPL) – best is to copy the license header from the Gromacs file if you took a lot of the code from there.


# Useful Functions
cdef real rvec_norm2(const rvec a) nogil:
    return a[XX]*a[XX] + a[YY]*a[YY] + a[ZZ]*a[ZZ]
Member

PEP8 will likely not like the formatting here and elsewhere.

Run it locally and clean up...

Member

(Well, I guess pep8 does not check cython files, does it?)

Member

Not that I know of.

Contributor Author

Indeed, AFAIK, pep8 just cares about pure python files.

if use:  # Accept this shift vector.
    if self.c_pbcbox.ntric_vec >= MAX_NTRICVEC:
        with gil:
            print("\nWARNING: Found more than %d triclinic "
Member

Can we get rid of print and use regular logging/warning? You already grabbed the GIL so it shouldn't be an issue to use warnings.warn().
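A sketch of the suggested change (the function and parameter names here are illustrative, not from the PR):

```python
import warnings

def check_tric_vectors(ntric_vec, max_ntricvec=12):
    # Instead of print(), emit a RuntimeWarning: users can then filter
    # it, log it, or promote it to an error via the warnings machinery.
    if ntric_vec >= max_ntricvec:
        warnings.warn(
            "Found more than {} triclinic correction vectors; "
            "check your box.".format(max_ntricvec),
            RuntimeWarning,
        )
```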

Contributor Author

Good point. Especially here, since the only time I saw this warning, it was triggered by a bug.

"your box.")
print(np.array(box))

for i in range(self.c_pbcbox.ntric_vec):
Member

All this output for just a warning?? Shouldn't this rather be for a true exception?

Contributor Author

To be honest, I don't know. In the original code, the warning is just printed out without further consequences. (See: https://github.com/aar2163/GROMACS/blob/05ff1bdbc33224ec6b7bb1d328abd0a4b3caf156/src/gmxlib/pbc.c#L454)
I never managed to trigger it anyway.

return searcher.search(ids)


def test_pbc_badbox():
Member

Better to write this as a parametrized test with pytest.mark.parametrize, to catch each individual case.
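A sketch of what the parametrized version could look like (validate_box and the list of bad boxes are hypothetical placeholders for the PR's PBCBox checks):

```python
import numpy as np
import pytest

def validate_box(box):
    """Hypothetical stand-in for e.g. the PBCBox constructor checks."""
    box = np.asarray(box)
    if box.shape != (3, 3) or (np.diag(box) <= 0).any():
        raise ValueError("Box does not correspond to PBC=xyz")

# Each entry becomes its own test case, so a failure reports exactly
# which bad box broke instead of stopping at the first one.
BAD_BOXES = [
    np.zeros((3, 3), dtype=np.float32),   # all-zero box
    -np.eye(3, dtype=np.float32),         # negative box lengths
    np.eye(2, dtype=np.float32),          # wrong shape
]

@pytest.mark.parametrize("box", BAD_BOXES)
def test_pbc_badbox(box):
    with pytest.raises(ValueError):
        validate_box(box)
```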

Contributor Author

I didn't know this one! It will make the test code more efficient.

Member

Check the docs. This is one feature of pytest I really like.

pbcbox.dx(bad, a)
pbcbox.dx(a, bad)

assert_equal(pbcbox.dx(a, b), dx)
Member

use assert_almost_equal for floats.

pbcbox.dx(a, bad)

assert_equal(pbcbox.dx(a, b), dx)
assert_allclose(pbcbox.distance(a, b), np.sqrt(np.sum(dx*dx)), atol=1e-5)
Member

I think we have also been using assert_almost_equal in these cases, you can set decimals.
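For illustration, the decimal keyword of numpy.testing.assert_almost_equal works like this (values are made up):

```python
import numpy as np
from numpy.testing import assert_almost_equal

a = np.array([1.0, 2.0, 3.0], dtype=np.float32)
b = (a + 1e-6).astype(np.float32)  # differs at float32 round-off scale

# Exact comparison would fail here; assert_almost_equal instead checks
# abs(a - b) < 1.5 * 10**-decimal elementwise.
assert_almost_equal(a, b, decimal=5)
```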


pbcbox = nsgrid.PBCBox(box)

assert_allclose(pbcbox.put_atoms_in_bbox(coords), results, atol=1e-5)
Member

assert_almost_equal?

Contributor Author

One bad habit of mine... You're right, assert_almost_equal is better suited.

with pytest.raises(TypeError):
    nsgrid.FastNS(None, 1)

def test_nsgrid_badcutoff(universe):
Member

parametrize

Member

@kain88-de kain88-de left a comment

I didn't check to understand the algorithms used yet. I did notice a few issues with memory management.


# Useful Functions
cdef real rvec_norm2(const rvec a) nogil:
    return a[XX]*a[XX] + a[YY]*a[YY] + a[ZZ]*a[ZZ]
Member

Not that I know of.

if self.debug:
    print("Total number of pairs={}".format(npairs))

# ref_bead = 13937
Member

This can be removed; I don't see how it can be useful.

cellindex_probe = self.grid.coord2cellid(probe)

if cellindex == cellindex_probe and xi != 1 and yi != 1 and zi != 1:
# if self.debug and debug:
Member

Either remove all the debug code or uncomment it.

Contributor Author

These comments are remnants of some bug hunts... I will remove them.

cdef bint prepared
cdef NSGrid grid

def __init__(self, u, cutoff, coords=None, prepare=True, debug=False, max_gridsize=5000):
Member

It doesn't need a universe. Just use coords and box arguments like the other functions in lib.


with nogil:
    for i in range(size_search):
        if memory_error:
Member

Where is this value ever changed? I see all the checks and they sound nice, but I don't see where you set this value.

Contributor Author

What do you mean? When is memory_error set to True? It is set if NSResults.resize fails to reallocate memory.

Member

I found it later. It's a bit hidden away in the nested loops.

Contributor Author

Yes, unfortunately, I do not see any cleaner way to propagate a memory allocation failure without requiring the GIL (and thus making the code slower).

with gil:
    self.beadids = <ns_int *> PyMem_Malloc(sizeof(ns_int) * self.size * self.nbeads_per_cell)  # np.empty((self.size, nbeads_max), dtype=np.int)
    if not self.beadids:
        raise MemoryError("Could not allocate memory for NSGrid.beadids ({} bits requested)".format(sizeof(ns_int) * self.size * self.nbeads_per_cell))
Member

This is a memory leak! The array allocated to beadcounts is not released if this malloc call fails.

Contributor Author

Good call! I did not catch this one when testing, as PyMem_Malloc does not fail without a bug or a humongous system.

cdef ns_int *beadcounts = NULL

# Allocate memory
beadcounts = <ns_int *> PyMem_Malloc(sizeof(ns_int) * self.size)
Member

I would replace this with cdef ns_int_t[:] beadcounts = np.empty(self.size, dtype=ns_int). Then Python does proper reference counting and you avoid the memory leak mentioned earlier.

Contributor Author

Good idea

Member

My example code assumes that you have something like

ns_int_t = np.int64_t
ns_int = np.int64

at the beginning of the file to distinguish between the C int type and the numpy dtype. It's a common convention, also used in the Cython docs and examples.

self.nbeads_per_cell = self.nbeads[cellindex]

# Allocate memory
with gil:
Member

I would remove the indentation here and open another nogil block below.

self.beadids = NULL
self.cellids = <ns_int *> PyMem_Malloc(sizeof(ns_int) * self.ncoords)
if not self.cellids:
    raise MemoryError("Could not allocate memory from NSGrid.cellids ({} bits requested)".format(sizeof(ns_int) * self.ncoords))
Member

Another memory leak: if allocating the previous arrays succeeded, their memory will never be released in the Python process. I assume there are more memory leaks like this in the code.

Contributor Author

I don't think there is a memory leak, as the other self.* arrays will be deallocated by __dealloc__ when the NSGrid is thrown away.

Member

Is this also true when the call to __init__ fails? I believe you for the other functions, but for __init__ I do not know what the interpreter will do. The question is also whether some of the allocations should go into a __cinit__ function.

Contributor Author

I think that __dealloc__ is always called, even when __init__ failed... but I will check that.

Member

According to an old mailing list entry, __dealloc__ should be called and work fine, so there is no memory leak. But according to the docs, the __init__ method can be called more than once:

Under some circumstances it is possible for __init__() to be called more than once or not to be called at all, so your other methods should be designed to be robust in such situations.

If we call malloc twice, that would be a memory leak. More troubling is that the memory might not be initialized at all.
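A pure-Python sketch of the hazard being discussed (illustrative only; in Cython the robust fix is to do C allocation in __cinit__, which is guaranteed to run exactly once):

```python
class Grid:
    def __init__(self, size):
        # Guard the allocation: a second __init__ call must not
        # "malloc" again and leak the first buffer.
        if getattr(self, "_buffer", None) is None:
            self._buffer = [0] * size
```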

self.pairs_buffer = None
self.pair_coordinates_buffer = None

def __dealloc__(self):
Member

by the garbage collection when the class is deleted from memory

@ayushsuhane
Contributor

Hi Seb,

I also made a benchmark with the current state of gridsearch except I changed the dependency from MDAnalysis.universe to box dimensions.

https://github.com/ayushsuhane/Benchmarks_Distance/blob/master/Notebooks/Augment-and-Gridsearch.ipynb

While grid search is the fastest in most of the cases, the tree structure becomes advantageous for smaller cutoff distances. I think this is mainly because of the huge number of cells for smaller distances. One trick could be to limit the maximum number of cells and then search for the desired radius. Also, correct me if I am wrong, but the code in its current state cannot deal with non-periodic systems, which I believe also remains to be included.

@seb-buch
Contributor Author

Thanks Ayush for the benchmarks, I'm glad to see that they reflect the quick and dirty ones I did while testing the improvements to the code.
About the number of cells when the grid is built: there is a mechanism to limit it but, according to your tests, it may not be aggressive enough. I will take a look.
You are also right that, in the current implementation, the neighbor search can only be done with full PBC (i.e. xyz).

@kain88-de
Member

kain88-de commented Jul 19, 2018 via email

def __init__(self, real[:,::1] box):
    self.update(box)

cdef void fast_update(self, real[:,::1] box) nogil:
Member

In general, because this isn't a parallel module, all references to the GIL can be removed, right?

Member

I would leave the GIL code in – you never know under which circumstances someone is trying to run the code and having the GIL/NOGIL here seems like valuable information, i.e., someone has thought about the code. Seems a shame to throw this knowledge away.

self.fast_update(box)


cdef void fast_pbc_dx(self, rvec ref, rvec other, rvec dx) nogil:
Member

So this is a different algorithm to what we use for all other minimum image calculations. It looks like it doesn't have to search all images and pick the minimum (which is what our triclinic PBC code has to do), in which case, if it works just as well, it's an improvement.

Either way, I don't like having a different minimum image solution in this class; we should use the same one throughout MDA. So either this is better, in which case implement it everywhere, or it's wrong, in which case use the one in calc_distances.h.

Member

It would be good to test this against the calc_distances implementation for triclinic cells with small angles and random data. I do understand how this works and that it can be faster than the full image search. It would also work independent of box type. As @richardjgowers says, this should use the code in calc_distances.h, and replace it if this turns out to be faster.
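To illustrate the two strategies under discussion, a pure-Python sketch for the orthorhombic case only (the triclinic case is what needs the extra shift vectors; function names are illustrative, not from the PR):

```python
import numpy as np

def minimum_image_dx_bruteforce(ref, other, box):
    """Reference strategy: try all 27 periodic images and keep the
    shortest vector. Always correct, but evaluates 27 candidates."""
    best = None
    for sx in (-1, 0, 1):
        for sy in (-1, 0, 1):
            for sz in (-1, 0, 1):
                dx = other + np.array([sx, sy, sz]) * box - ref
                if best is None or np.dot(dx, dx) < np.dot(best, best):
                    best = dx
    return best

def minimum_image_dx_direct(ref, other, box):
    """For an orthorhombic box the shift can be computed directly:
    one rounding per dimension, no image search."""
    dx = other - ref
    return dx - box * np.round(dx / box)
```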

raise ValueError("Not 3 D coordinates")
return self.fast_distance(&a[XX], &b[XX])

cdef real[:, ::1] fast_put_atoms_in_bbox(self, real[:,::1] coords) nogil:
Member

I think this is identical to lib.distances.pack_into_box
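For the orthorhombic case the operation is just a per-axis wrap, which is presumably why it overlaps with pack_into_box; a sketch (names illustrative):

```python
import numpy as np

def put_atoms_in_bbox(coords, box_lengths):
    # Wrap each coordinate into [0, L) along every axis; for an
    # orthorhombic box this matches what pack_into_box does.
    return np.mod(coords, box_lengths)
```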


# Preallocate memory
self.allocation_size = search_ids.shape[0] + 1
self.pairs = <ipair *> PyMem_Malloc(sizeof(ipair) * self.allocation_size)
Member

I thought __cinit__ was for allocating C memory

return self.npairs

cdef int resize(self, ns_int new_size) nogil:
    # Important: If this function returns 0, it means that memory allocation failed
Member

This is counterintuitive; for most other functions, returning 0 means no error.

Member

We could use some constants at the beginning, like OK and ERROR.

cdef real[:] coord_i, coord_j
from collections import defaultdict

indices_buffer = defaultdict(list)
Member

We can use C++ vectors instead here.
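For reference, the Python-level pattern that would be replaced is per-atom grouping of the pair list; a minimal sketch (names illustrative):

```python
from collections import defaultdict

def group_neighbors(pairs):
    # Group the flat pair list per atom id, as the buffer-filling code
    # in this PR does; the review suggests C++ vectors for the same job.
    indices_buffer = defaultdict(list)
    for i, j in pairs:
        indices_buffer[i].append(j)
        indices_buffer[j].append(i)
    return indices_buffer
```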

self.coordinates_buffer.append(np.array(coords_buffer[elm])[sorted_indices])
self.distances_buffer.append(np.sqrt(dists_buffer[elm])[sorted_indices])

def get_indices(self):
Member

get_X methods aren't very Pythonic; could just use a property.
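A sketch of the suggested API style (class and attribute names are illustrative):

```python
class NSResults:
    def __init__(self, pairs):
        self._pairs = pairs

    @property
    def indices(self):
        # replaces a get_indices() method with attribute-style access
        return sorted({i for pair in self._pairs for i in pair})
```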

@richardjgowers
Member

I found this library for memory allocation: https://github.com/explosion/cymem

It seems like it takes care of a lot of bookkeeping and error checking, which would simplify a lot of this code. I'll try to find some time to see if it works inside these objects.

Member

@kain88-de kain88-de left a comment

I gave the code a more thorough read today. The code is generally of good quality. There are still some issues left we should address now.

rvec mhbox_diag
real max_cutoff2
ns_int ntric_vec
ns_int[DIM] tric_shift[MAX_NTRICVEC]
Member

Is this a leftover from somewhere else? I see that you set those values, but I can't find where you use them. Could you please show me?

PyMem_Free(self.pair_distances2)

cdef int add_neighbors(self, ns_int beadid_i, ns_int beadid_j, real distance2) nogil:
    # Important: If this function returns 0, it means that memory allocation failed
Member

I'm OK with this now. But a return value should either be a useful variable or an exit code, not both. The better option is to later query self.npairs or to update an out pointer.

continue


for zi in range(DIM):
Member

This loop is missing a memory error check.

current_beadid = search_ids_view[i]

cellindex = self.grid.cellids[current_beadid]
self.grid.cellid2cellxyz(cellindex, cellxyz)
Member

Are those unused variables?


self.coords_bbox = self.box.fast_put_atoms_in_bbox(coords)

if cutoff < 0:
Member

I prefer these failure checks at the beginning. That avoids unnecessary work when we know the computation can't be done anyway.


return 1

def get_pairs(self):
Member

This function can get stale:

results.add_neighbors(...)
results.get_pairs(...) 
results.add_neighbors(...)
# will not update pairs buffer values
results.get_pairs(...) 

This is true for all get_* functions. The solution would be for add_neighbors to invalidate the buffers.

I know this is not used incorrectly at the moment, and cannot be once the result is passed back into the Python stack. For future proofing, it would still be nice if get_pairs always worked. Who knows what we will use this for in the future.
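A sketch of the suggested fix, with add_neighbors invalidating the cached buffer (pure Python, names illustrative):

```python
class NSResultsSketch:
    def __init__(self):
        self._pairs = []
        self._pairs_buffer = None  # None marks the cache as stale

    def add_neighbors(self, i, j):
        self._pairs.append((i, j))
        self._pairs_buffer = None  # invalidate on every mutation

    def get_pairs(self):
        if self._pairs_buffer is None:  # rebuild only when stale
            self._pairs_buffer = tuple(self._pairs)
        return self._pairs_buffer
```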

@codecov

codecov bot commented Jul 23, 2018

Codecov Report

Merging #1996 into develop will not change coverage.
The diff coverage is 100%.

Impacted file tree graph

@@           Coverage Diff            @@
##           develop    #1996   +/-   ##
========================================
  Coverage    88.48%   88.48%           
========================================
  Files          142      142           
  Lines        17203    17203           
  Branches      2635     2635           
========================================
  Hits         15222    15222           
  Misses        1385     1385           
  Partials       596      596
Impacted Files Coverage Δ
package/MDAnalysis/lib/__init__.py 100% <100%> (ø) ⬆️
package/MDAnalysis/lib/distances.py 87.17% <0%> (-0.05%) ⬇️

Continue to review full report at Codecov.

Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update 238904d...607aadf.

@kain88-de
Member

@ayushsuhane you currently have a good overview of the cell list algorithms. Could you also give this a careful read and say whether it matches your needs?


for j in range(self.grid.nbeads[cellindex_adjacent]):
    bid = self.grid.beadids[cellindex_adjacent * self.grid.nbeads_per_cell + j]
    if checked[bid] != 0:
Contributor

I think it could be better to have the checked flag on the cell rather than on every coordinate in a cell, since in cell lists every cell is an entity, as opposed to every atom. AFAIK, every time we need to do the evaluation, we need to check at least one cell (all coordinates in the cell).

Here, cellindex_probe might become useful to reduce the computations, since the cutoff radius is always smaller than or equal to the cell size. To prove my point, consider a box of size 10 with cell size 2 in all directions. For a cutoff distance of 0.4 and a probe at the centre of any cell, all the shifts of the probe will still land the coordinates in the same cell, and if the checked flag is on the cell, we just need to compute the distances once and not check every coordinate individually.


# Get the cell index corresponding to the coord
cellindex_adjacent = self.grid.coord2cellid(shifted_coord)
cellindex_probe = self.grid.coord2cellid(probe)
Contributor

I am not sure I understand the use of probe here?

probe[ZZ] = self.coords[current_beadid, ZZ] + (zi - 1) * self.cutoff
# Make sure the shifted coordinates are inside the brick-shaped box
for m in range(DIM - 1, -1, -1):
    while shifted_coord[m] < 0:
Contributor

@ayushsuhane ayushsuhane Jul 29, 2018

I think it would be better if order could be used here in some way. It would be helpful for searching all the pairs, such as in guess_bonds, i.e. we would only need to evaluate half the pairs (27/2 cells) to get all the pairs, as every mirror pair would be included exactly once in those half-searches.


self.prepared = True

def search(self, search_ids=None):
Contributor

I would have preferred if it took input coordinates, i.e. u.atoms.positions, as an argument, but this also works (especially considering its use with Universe instances).


# Allocate memory
self.beadids = <ns_int *> PyMem_Malloc(sizeof(ns_int) * self.size * self.nbeads_per_cell) #np.empty((self.size, nbeads_max), dtype=np.int)
if not self.beadids:
Contributor

The name nbeads_per_cell is a little confusing; it's the maximum number of beads in any cell, right?

return <ns_int> (coord[ZZ] / self.cellsize[ZZ]) * (self.ncells[XX] * self.ncells[YY]) +\
<ns_int> (coord[YY] / self.cellsize[YY]) * self.ncells[XX] + \
<ns_int> (coord[XX] / self.cellsize[XX])

Contributor

Can use pre-calculated cell_offsets, as it is a highly used function.
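A sketch of the suggestion in plain Python, with the strides (cell offsets) computed once up front (example sizes are made up, not from the PR):

```python
import numpy as np

NCELLS = np.array([4, 5, 6])           # cells per dimension (example)
CELLSIZE = np.array([2.0, 2.0, 2.0])   # cell edge lengths (example)

# Precompute the strides once instead of re-deriving the products
# ncells[XX] and ncells[XX] * ncells[YY] on every call.
OFFSETS = np.array([1, NCELLS[0], NCELLS[0] * NCELLS[1]])

def coord2cellid(coord):
    idx = (np.asarray(coord) / CELLSIZE).astype(int)
    return int(np.dot(idx, OFFSETS))
```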

if (box[XX, XX] == 0) or (box[YY, YY] == 0) or (box[ZZ, ZZ] == 0):
    raise ValueError("Box does not correspond to PBC=xyz")
self.fast_update(box)

Contributor

I think functionality to handle no PBC could be implemented here. Probably something like

def fast_update_nopbc(box, coords):
    min_dim = coords.min(axis=0)
    max_dim = coords.max(axis=0)
    box[:3] = max_dim - min_dim
    box[3:] = 90.  # making it an orthogonal box

and handle the same in the search function.

@richardjgowers
Member

@ayushsuhane can you take the docs (and any other changes made since you forked from this) and add them to your PR?

@ayushsuhane
Contributor

Yes, all of the desired changes are already there. I will recheck once again, though.
