
Conversation

@chaoyli (Contributor) commented Mar 22, 2019

In streaming ingestion, a segment group's reference count is set to one at creation.
Upon closing, that reference should be released. Otherwise, the file descriptor and
the segment group object in memory cannot be freed.
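
As a rough illustration of the lifecycle this fix relies on (the class and member names below are simplified stand-ins, not the actual Doris SegmentGroup API), an intrusively ref-counted object of this kind starts with one reference at creation, and only the final release() closes the file and frees the object:

// Simplified sketch, not the real Doris classes: an intrusively
// ref-counted segment group whose creator holds the one and only
// initial reference. Closing must call release(), otherwise the
// destructor never runs and the file descriptor stays open.
#include <atomic>
#include <cstdio>

class RefCounted {
public:
    RefCounted() : _ref_count(1) {}              // created with one reference
    virtual ~RefCounted() = default;
    void acquire() { _ref_count.fetch_add(1); }
    bool release() {                             // true when the object was freed
        if (_ref_count.fetch_sub(1) == 1) {
            delete this;
            return true;
        }
        return false;
    }
private:
    std::atomic<int> _ref_count;
};

class FakeSegmentGroup : public RefCounted {
public:
    explicit FakeSegmentGroup(std::FILE* fd) : _fd(fd) {}
    ~FakeSegmentGroup() override { if (_fd) std::fclose(_fd); }
private:
    std::FILE* _fd;
};

Under that rule, the release() loop over _segment_group_vec in this change is what drops the creation reference when the writer closes.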

@chaoyli (Contributor, Author) commented Mar 22, 2019

The associated issue is #690.

}
for (SegmentGroup* segment_group : _segment_group_vec) {
segment_group->release();
}

Contributor (review comment on the code above)

_garbage_collection and this code will release a segment group twice

Contributor Author

OK

Contributor

Why doesn't _new_segment_group_vec need to be released?

@kangkaisen (Contributor)

Do we still need to release segment_group in OLAPTable::delete_pending_data?

@chaoyli (Contributor, Author) commented Mar 22, 2019

Do we still need to release segment_group in OLAPTable::delete_pending_data?

If delta_writer has released the segment group, OLAPTable::delete_pending_data does not need to release it again. When add_pending_data is called on olap_table, the reference count of the SegmentGroup is not incremented.
This may be a bit tricky, and will be refactored in a following release.
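
A small sketch of that accounting, reusing the illustrative (non-Doris) FakeSegmentGroup from the earlier sketch: add_pending_data stores the pointer without taking its own reference, so the delta writer's release() on close drops the only reference, and delete_pending_data must not release again:

// Illustrative only; not the real OLAPTable interface.
#include <vector>

class FakeOlapTable {
public:
    void add_pending_data(FakeSegmentGroup* sg) {
        _pending_segment_groups.push_back(sg);   // note: no sg->acquire() here
    }
    void delete_pending_data() {
        // No release() either: the delta writer already dropped the only
        // reference on close, so releasing here would be a double release.
        _pending_segment_groups.clear();
    }
private:
    std::vector<FakeSegmentGroup*> _pending_segment_groups;
};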

@kangkaisen (Contributor)

@chaoyli OK.

@imay imay merged commit e60b71d into apache:master Mar 22, 2019