Bug: Octree depth incorrectly increases for translated point clouds (introduced in 18.41) #318

Description

@Howich

Hello Professor,

I believe I have stumbled upon a potential bug in the calculation of the octree depth from --width.

I have observed that the octree depth is computed incorrectly when the point cloud lies farther from the origin.
To verify, I used the point cloud datasets you provide, Bunny and Eagle. I created a new version of Eagle, called eagle.points-translated.ply, in which the point cloud is translated by 20 units along one axis (see image):

[Image: the original Eagle point cloud and the copy translated by 20 units along one axis]

If I then run with

PoissonRecon.exe --in eagle.points.ply --out eagle.mesh.ply --verbose --width 0.1

and

PoissonRecon.exe --in eagle.points-translated.ply --out eagle.mesh-translated.ply --verbose --width 0.1

I get the following results in different versions:

Eagle
Version 18.70: original depth 7, translated depth 8; original size 1647 KB, translated size 6609 KB
Version 18.41: original depth 7, translated depth 8; original size 1647 KB, translated size 6609 KB
Version 18.40: original depth 7, translated depth 7; original size 1647 KB, translated size 1647 KB

It seems that some change between versions 18.40 and 18.41 broke this. I stepped through your code in the debugger and found the following:

From version 18.70, Reconstructors.h, line 96:

template< unsigned int Dim >
void testAndSet( XForm< Real , Dim > unitCubeToModel )
{
	if( width>0 )
	{
		Real maxScale = 0;
		for( unsigned int i=0 ; i<Dim ; i++ )
		{
			Real l2 = 0;
			for( unsigned int j=0 ; j<Dim ; j++ ) l2 += unitCubeToModel(i,j) * unitCubeToModel(i,j);
			if( l2>maxScale ) maxScale = l2;
		}
		maxScale = sqrt( maxScale );
		depth = (unsigned int)ceil( std::max< double >( 0. , log( maxScale/width )/log(2.) ) );
	}

Here, Dim should be 3, but for some reason it is 4 in versions later than 18.40, which leads to maxScale being miscalculated. maxScale is computed from unitCubeToModel, which is a 4x4 matrix. For the original Eagle point set, unitCubeToModel is

{{9.48755455, -0.00000000, 0.00000000, -0.00000000}
{-0.00000000, 9.48755455, -0.00000000, 0.00000000}
{0.00000000, -0.00000000, 9.48755455, -0.00000000}
{-3.36373544, -1.98073220, -3.37555718, 1.00000000}}

Every row has its L2 norm computed:
row [0]: 9.48755455
row [1]: 9.48755455
row [2]: 9.48755455
row [3]: sqrt((-3.36373544)^2 + (-1.98073220)^2 + (-3.37555718)^2 + 1.00000000^2) = 5.256...

Thus maxScale is set to 9.48755455.

However, for the translated point set it becomes:

{{9.48755455, -0.00000000, 0.00000000, -0.00000000}
{-0.00000000, 9.48755455, -0.00000000, 0.00000000}
{0.00000000, -0.00000000, 9.48755455, -0.00000000}
{-3.36373544, 18.0192699, -3.37555718, 1.00000000}}

Every row has its L2 norm computed:
row [0]: 9.48755455
row [1]: 9.48755455
row [2]: 9.48755455
row [3]: sqrt((-3.36373544)^2 + 18.0192699^2 + (-3.37555718)^2 + 1.00000000^2) = 18.665...

Thus maxScale is set to 18.665 in this case! With --width 0.1, this gives depth = ceil(log2(18.665/0.1)) = 8, whereas the original gives depth = ceil(log2(9.48755455/0.1)) = 7, so the octree ends up one level deeper than anticipated.
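
To double-check that this fully explains the depth difference, I reproduced the computation outside of PoissonRecon. This is just a standalone sketch (the function name DepthFromWidth and the hard-coded matrix are only for illustration; the matrix values are the ones I read out in the debugger), with the row count standing in for Dim being 3 vs. 4:

#include <algorithm>
#include <cmath>
#include <cstdio>

// Computes maxScale and depth the same way as testAndSet in Reconstructors.h,
// but over only the first `rows` rows of the 4x4 unitCubeToModel matrix.
unsigned int DepthFromWidth( const double m[4][4] , unsigned int rows , double width )
{
	double maxScale = 0;
	for( unsigned int i=0 ; i<rows ; i++ )
	{
		double l2 = 0;
		for( unsigned int j=0 ; j<rows ; j++ ) l2 += m[i][j] * m[i][j];
		if( l2>maxScale ) maxScale = l2;
	}
	maxScale = sqrt( maxScale );
	return (unsigned int)ceil( std::max< double >( 0. , log( maxScale/width )/log(2.) ) );
}

int main( void )
{
	// unitCubeToModel for the translated Eagle, as read out in the debugger
	const double m[4][4] =
	{
		{  9.48755455 ,  0.         ,  0.         , 0. } ,
		{  0.         ,  9.48755455 ,  0.         , 0. } ,
		{  0.         ,  0.         ,  9.48755455 , 0. } ,
		{ -3.36373544 , 18.0192699  , -3.37555718 , 1. }
	};
	printf( "3 rows (old behavior): depth %u\n" , DepthFromWidth( m , 3 , 0.1 ) );   // prints 7
	printf( "4 rows (new behavior): depth %u\n" , DepthFromWidth( m , 4 , 0.1 ) );   // prints 8
	return 0;
}

Excluding the translation row gives depth 7, including it gives depth 8, which matches the meshes reported above.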

I noticed that in versions earlier than 18.41 Dim is 3, while in later versions it is 4; as a result, earlier versions of PoissonRecon did not include the L2 norm of row [3] in maxScale, which makes sense. The code is a bit over my head, and I couldn't find where the value of Dim is set.
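
In case it helps, below is the change I would naively have expected. This is only a guess on my part, not a tested patch; I'm assuming the extra dimension means unitCubeToModel is now a homogeneous transform, so the translation row should not contribute to the scale:

template< unsigned int Dim >
void testAndSet( XForm< Real , Dim > unitCubeToModel )
{
	if( width>0 )
	{
		Real maxScale = 0;
		// Assumption: only the (Dim-1)x(Dim-1) linear block of the homogeneous
		// transform determines the scale; the last row holds the translation.
		for( unsigned int i=0 ; i<Dim-1 ; i++ )
		{
			Real l2 = 0;
			for( unsigned int j=0 ; j<Dim-1 ; j++ ) l2 += unitCubeToModel(i,j) * unitCubeToModel(i,j);
			if( l2>maxScale ) maxScale = l2;
		}
		maxScale = sqrt( maxScale );
		depth = (unsigned int)ceil( std::max< double >( 0. , log( maxScale/width )/log(2.) ) );
	}
}

With the matrices above, this should give maxScale = 9.48755455 for both the original and the translated Eagle, and hence depth 7 in both cases.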

Best regards,
Oskar
