
Conversation

@tlauda (Contributor) commented Feb 27, 2020

This set of patches changes the way notifications are allocated and sent. More details are in the commit descriptions.

@dbaluta (Collaborator) commented:

@tlauda do we have a real use case where checking the IPC bits is not enough?

@tlauda (Contributor, Author) replied:

@dbaluta Yes, during specific power flows on the cAVS DSP.

@lgirdwood (Member) left a comment:

Looks good, but my only thought is that an allocation failure may prevent an IPC message? Should we pre-allocate the IPC messages (i.e. embed them into component private data)? That way we are always guaranteed to succeed. I would also pre-allocate 8 messages on the IPC core too, so that we always copy client messages into IPC core messages (meaning we won't have collisions between host/IPC core and clients).
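
A minimal sketch of the embedding idea lgirdwood describes, assuming hypothetical type and field names (comp_data, posn_msg and this struct ipc_msg layout are illustrative, not the actual SOF definitions):

/* Sketch only: pre-allocating the message inside the component's
 * private data means sending can never fail at runtime due to an
 * allocation failure. All names below are hypothetical.
 */
struct ipc_msg {
    uint32_t header;        /* IPC header word */
    uint32_t tx_size;       /* payload size in bytes */
    void *tx_data;          /* payload buffer */
    struct list_item list;  /* linkage in the pending tx queue */
};

struct comp_data {
    /* ... other component state ... */
    struct sof_ipc_stream_posn posn;  /* payload storage */
    struct ipc_msg posn_msg;          /* embedded, never allocated */
};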

@tlauda (Contributor, Author) commented Feb 27, 2020

> Looks good, but my only thought is that an allocation failure may prevent an IPC message? Should we pre-allocate the IPC messages (i.e. embed them into component private data)? That way we are always guaranteed to succeed. I would also pre-allocate 8 messages on the IPC core too, so that we always copy client messages into IPC core messages (meaning we won't have collisions between host/IPC core and clients).

IPCs are allocated along with the notification owner. There are no other allocations during the owner's lifetime. I don't think I get the second part. What is your definition of host/IPC core/client?

@mmaka1 commented Feb 27, 2020

@lgirdwood Pre-allocation on the IPC core would consume a lot of memory when the max IPC size is aligned with the mailbox size, since the core would need to allocate the max for each message in the pool, while an originator can do a precise allocation.

@tlauda force-pushed the topic/notifications-refactor branch from 3192c4d to e0e16e1 on February 27, 2020 12:30
Inline review comment:

I am not sure why a dedicated function is needed for each specific message. It looks like some generic ipc_prepare(...) called by a client could do the work?

Inline review comment:

I believe this should not be part of the IPC code. How about:

position_update(...)
{
    mailbox_stream_write(...)
    ipc_send(...)
}

@lgirdwood (Member) commented Feb 27, 2020

@tlauda the thinking here with host/IPC core vs client is a client changing data during a copy to the SRAM window (e.g. during client IRQ code).
@mmaka1 Sorry, to be clearer: I was thinking more about the struct msg pre-allocation, rather than the whole mailbox.

@tlauda (Contributor, Author) commented Feb 28, 2020

@lgirdwood With this set of patches we're completely changing the approach to owning and sending notifications. The idea was proposed by @mmaka1 as a potential solution for optimizing memory usage. Just to sum up:

  1. There is no longer any common pool for IPC notifications. Every initiator allocates one notification message with the exact size it requires.
  2. Notifications can be sent with high priority, meaning we try to send them right away and, if the upstream channel is busy, add them to the beginning of the message queue (sketched below).
  3. If a notification from this particular originator is already in the queue, we just update its data and don't insert it again.
  4. Everything is protected by a lock.
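
A minimal sketch of points 2-4, assuming hypothetical names (msg_queue, ipc_channel_is_free(), ipc_platform_send() and the lock helpers are illustrative, not the actual SOF API):

/* Sketch only: queue a notification, de-duplicating per originator. */
void ipc_msg_send(struct ipc_msg *msg, void *data, bool high_priority)
{
    uint32_t flags;

    spin_lock_irq(&ipc_lock, flags);    /* point 4: lock-protected */

    /* Refresh the payload in place. */
    memcpy(msg->tx_data, data, msg->tx_size);

    /* Point 3: if this originator's message is already queued,
     * updating the data above is all that was needed.
     */
    if (!list_is_empty(&msg->list))
        goto out;

    if (high_priority && ipc_channel_is_free()) {
        /* Point 2: try to send right away... */
        ipc_platform_send(msg);
    } else if (high_priority) {
        /* ...and if the channel is busy, queue at the head. */
        list_item_prepend(&msg->list, &msg_queue);
    } else {
        list_item_append(&msg->list, &msg_queue);
    }

out:
    spin_unlock_irq(&ipc_lock, flags);
}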

@paulstelian97 (Collaborator) commented:

> With this set of patches we're completely changing the approach to owning and sending notifications. [summary quoted above]

I have one issue: should there be support for canceling a notification that is in the queue whenever the data structures for a component are freed? We 100% don't want to send invalid notifications via a use-after-free, do we?

@tlauda (Contributor, Author) commented Feb 28, 2020

> Should there be support for canceling a notification that is in the queue whenever the data structures for a component are freed?

There is such support in ipc_msg_free().
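
A minimal sketch of that cancellation path, assuming the same hypothetical names as above (the msg->list linkage and lock helpers are illustrative):

/* Sketch only: freeing the message also unlinks it from the pending
 * queue under the lock, so a freed component can never trigger a
 * use-after-free notification.
 */
void ipc_msg_free(struct ipc_msg *msg)
{
    uint32_t flags;

    spin_lock_irq(&ipc_lock, flags);
    list_item_del(&msg->list);    /* cancel if still queued */
    spin_unlock_irq(&ipc_lock, flags);

    rfree(msg->tx_data);
    rfree(msg);
}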

@tlauda force-pushed the topic/notifications-refactor branch 2 times, most recently from 2f9ca73 to caae55b on February 28, 2020 13:44
@lgirdwood (Member) commented:

The Jenkins CI failure is a known issue, but we have a UT failure in the internal CI.

tlauda added 2 commits March 2, 2020 09:59
Adds an additional parameter to control whether the upstream IPC
channel is currently free or not. It's helpful in cases where,
for some reason, checking the IPC bits is not enough.

Signed-off-by: Tomasz Lauda <tomasz.lauda@linux.intel.com>
Returns -EBUSY when the upstream IPC channel is currently busy
and the notification cannot be sent.

Signed-off-by: Tomasz Lauda <tomasz.lauda@linux.intel.com>
tlauda added 4 commits March 2, 2020 09:59
Adds an improved implementation for sending IPC notifications
to the host. The new function takes an ipc_msg directly and adds
it to the message queue if the message is not already there.
Critical notifications are sent right away, or added to the
beginning of the queue if sending fails.

Signed-off-by: Tomasz Lauda <tomasz.lauda@linux.intel.com>
Adds helper methods for initially building IPC notifications.

Signed-off-by: Tomasz Lauda <tomasz.lauda@linux.intel.com>
Adds helper methods for IPC notification allocation and freeing.
This way we can allocate only the size we need without preallocating
the maximum size. The free operation automatically handles synchronization.

Signed-off-by: Tomasz Lauda <tomasz.lauda@linux.intel.com>
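
The commit above describes exact-size allocation; a minimal sketch of how such an allocator could look, with hypothetical names and allocator arguments (not the actual SOF signatures):

/* Sketch only: allocate exactly the payload size the originator
 * needs instead of the maximum IPC size.
 */
struct ipc_msg *ipc_msg_init(uint32_t header, uint32_t size)
{
    struct ipc_msg *msg = rzalloc(RZONE_RUNTIME, SOF_MEM_CAPS_RAM,
                                  sizeof(*msg));

    if (!msg)
        return NULL;

    msg->tx_data = rzalloc(RZONE_RUNTIME, SOF_MEM_CAPS_RAM, size);
    if (!msg->tx_data) {
        rfree(msg);
        return NULL;
    }

    msg->header = header;
    msg->tx_size = size;    /* exact size, not the maximum */
    list_init(&msg->list);

    return msg;
}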
Changes the implementation of notification allocation.
Instead of using a common empty list of messages, we switch
to allocating a message per notification source. This way we
won't have problems with memory and a lack of empty messages,
and we will also be able to optimize memory usage.

Signed-off-by: Tomasz Lauda <tomasz.lauda@linux.intel.com>
@lgirdwood (Member) commented:

SOFCI TEST

@lgirdwood (Member) commented:

Rerunning CI now that the WHL DUT has been rebooted.

@tlauda merged commit 3609fa6 into thesofproject:master on Mar 2, 2020