Draft: Metal max buffer workaround #1825

Closed
kiltyj wants to merge 2 commits into ggml-org:master from kiltyj:metal-max-buffer-workaround

Conversation

kiltyj (Contributor) commented Jun 12, 2023

This was my initial attempt last week at working around the device's maxBufferLength limit. It seems to work for 7B models, but not for larger models (e.g. guanaco 65B).

I'll keep poking at it after hours, but creating this so others can take a look if/as they find time.

Comment thread on ggml-metal.m:

    ctx->buffers[ctx->n_buffers].name = name;
    ctx->buffers[ctx->n_buffers].data = data;
    ctx->buffers[ctx->n_buffers].size = size;
    size_t sys_max_buffer_size = 2ul * 1024ul * 1024ul * 1024ul; // ctx->device.maxBufferLength;
kiltyj (Contributor, Author) commented:

Note: this is an artificial 2 GB limit that I had in place to test this out, since I don't actually hit maxBufferLength on my M1 Max.

Should be switched back to ctx->device.maxBufferLength once issues are worked out.

ggerganov (Member) commented:
@kiltyj Thanks for the help. I merged #1826 for now. It's not the best outcome, since we don't seem to be able to utilize all of unified memory, but at least it should handle problematic situations better, failing with an error instead of generating garbage.

ggerganov closed this Jun 18, 2023
