
[v8,3/9] firmware: arm_scmi: Add notification dispatch and delivery

Message ID 20200520081118.54897-4-cristian.marussi@arm.com (mailing list archive)
State New, archived
Series SCMI Notifications Core Support

Commit Message

Cristian Marussi May 20, 2020, 8:11 a.m. UTC
Add core SCMI Notifications dispatch and delivery support logic which is
able, at first, to dispatch well-known received events from the RX ISR to
the dedicated deferred worker, and then, from there, to finally deliver the
events to the registered users' callbacks.

Dispatch and delivery are just added here, still not enabled.
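
From the user's perspective, the machinery boils down to a standard
notifier_block whose callback receives the custom event report as its
data pointer. As a minimal sketch of such a callback (the report layout
below is illustrative only, and registration of the notifier_block is
not shown here):

  #include <linux/notifier.h>
  #include <linux/printk.h>
  #include <linux/types.h>

  /* Hypothetical report layout; real reports are per-protocol */
  struct example_event_report {
          u64 timestamp;
          u32 src_id;
  };

  static int user_cb(struct notifier_block *nb, unsigned long event_id,
                     void *data)
  {
          struct example_event_report *er = data;

          pr_info("evt:%lu src:%u ts:%llu\n", event_id, er->src_id,
                  er->timestamp);

          return NOTIFY_OK;
  }

  static struct notifier_block user_nb = {
          .notifier_call = user_cb,
  };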

Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
Signed-off-by: Cristian Marussi <cristian.marussi@arm.com>
---
V7 --> V8
- Fixed enabled check in scmi_notify() not to use atomics
- Added a few comments about queueing works
V5 --> V6
- added handle argument to fill_custom_report()
V4 --> V5
- fixed kernel-doc
- fixed unneeded var initialization
V3 --> V4
- dispatcher now handles dequeuing of events in chunks (header+payload):
  handling of these in_flight events lets us remove one unneeded memcpy
  on the RX interrupt path (scmi_notify)
- deferred dispatchers now access their own per-protocol handlers' tables,
  reducing locking contention on the RX path
V2 --> V3
- exposing wq in sysfs via WQ_SYSFS
V1 --> V2
- split out of V1 patch 04
- moved from IDR maps to real HashTables to store event_handlers
- simplified delivery logic
---
 drivers/firmware/arm_scmi/notify.c | 354 ++++++++++++++++++++++++++++-
 drivers/firmware/arm_scmi/notify.h |  10 +
 2 files changed, 362 insertions(+), 2 deletions(-)

Comments

Sudeep Holla June 8, 2020, 5:03 p.m. UTC | #1
On Wed, May 20, 2020 at 09:11:12AM +0100, Cristian Marussi wrote:
> Add core SCMI Notifications dispatch and delivery support logic which is
> able, at first, to dispatch well-known received events from the RX ISR to
> the dedicated deferred worker, and then, from there, to finally deliver the
> events to the registered users' callbacks.
> 
> Dispatch and delivery are just added here, still not enabled.
> 
> Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
> Signed-off-by: Cristian Marussi <cristian.marussi@arm.com>
> ---
>  drivers/firmware/arm_scmi/notify.c | 354 ++++++++++++++++++++++++++++-
>  drivers/firmware/arm_scmi/notify.h |  10 +
>  2 files changed, 362 insertions(+), 2 deletions(-)
> 
> diff --git a/drivers/firmware/arm_scmi/notify.c b/drivers/firmware/arm_scmi/notify.c
> index 7cf61dbe2a8e..d582f71fde5b 100644
> --- a/drivers/firmware/arm_scmi/notify.c
> +++ b/drivers/firmware/arm_scmi/notify.c

[...]

> @@ -1085,6 +1422,12 @@ int scmi_notification_init(struct scmi_handle *handle)
>  	ni->gid = gid;
>  	ni->handle = handle;
>  
> +	ni->notify_wq = alloc_workqueue("scmi_notify",
> +					WQ_UNBOUND | WQ_FREEZABLE | WQ_SYSFS,
> +					0);

What's the use of WQ_SYSFS for SCMI notifications? Do we need it?
Cristian Marussi June 17, 2020, 11:31 p.m. UTC | #2
On Mon, Jun 08, 2020 at 06:03:46PM +0100, Sudeep Holla wrote:
> On Wed, May 20, 2020 at 09:11:12AM +0100, Cristian Marussi wrote:
> > Add core SCMI Notifications dispatch and delivery support logic which is
> > able, at first, to dispatch well-known received events from the RX ISR to
> > the dedicated deferred worker, and then, from there, to finally deliver the
> > events to the registered users' callbacks.
> > 
> > Dispatch and delivery are just added here, still not enabled.
> > 
> > Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
> > Signed-off-by: Cristian Marussi <cristian.marussi@arm.com>
> > ---
> >  drivers/firmware/arm_scmi/notify.c | 354 ++++++++++++++++++++++++++++-
> >  drivers/firmware/arm_scmi/notify.h |  10 +
> >  2 files changed, 362 insertions(+), 2 deletions(-)
> > 
> > diff --git a/drivers/firmware/arm_scmi/notify.c b/drivers/firmware/arm_scmi/notify.c
> > index 7cf61dbe2a8e..d582f71fde5b 100644
> > --- a/drivers/firmware/arm_scmi/notify.c
> > +++ b/drivers/firmware/arm_scmi/notify.c
> 
> [...]
> 
> > @@ -1085,6 +1422,12 @@ int scmi_notification_init(struct scmi_handle *handle)
> >  	ni->gid = gid;
> >  	ni->handle = handle;
> >  
> > +	ni->notify_wq = alloc_workqueue("scmi_notify",
> > +					WQ_UNBOUND | WQ_FREEZABLE | WQ_SYSFS,
> > +					0);
> 
> What's the use of WQ_SYSFS for SCMI notifications? Do we need it?
> 

Lukasz asked for it when we were talking about making the workqueues' priorities configurable.
(not implemented in this series)

Thanks

Cristian
> -- 
> Regards,
> Sudeep
Lukasz Luba June 18, 2020, 8:37 a.m. UTC | #3
On 6/18/20 12:31 AM, Cristian Marussi wrote:
> On Mon, Jun 08, 2020 at 06:03:46PM +0100, Sudeep Holla wrote:
>> On Wed, May 20, 2020 at 09:11:12AM +0100, Cristian Marussi wrote:
>>> Add core SCMI Notifications dispatch and delivery support logic which is
>>> able, at first, to dispatch well-known received events from the RX ISR to
>>> the dedicated deferred worker, and then, from there, to finally deliver the
>>> events to the registered users' callbacks.
>>>
>>> Dispatch and delivery are just added here, still not enabled.
>>>
>>> Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
>>> Signed-off-by: Cristian Marussi <cristian.marussi@arm.com>
>>> ---
>>>   drivers/firmware/arm_scmi/notify.c | 354 ++++++++++++++++++++++++++++-
>>>   drivers/firmware/arm_scmi/notify.h |  10 +
>>>   2 files changed, 362 insertions(+), 2 deletions(-)
>>>
>>> diff --git a/drivers/firmware/arm_scmi/notify.c b/drivers/firmware/arm_scmi/notify.c
>>> index 7cf61dbe2a8e..d582f71fde5b 100644
>>> --- a/drivers/firmware/arm_scmi/notify.c
>>> +++ b/drivers/firmware/arm_scmi/notify.c
>>
>> [...]
>>
>>> @@ -1085,6 +1422,12 @@ int scmi_notification_init(struct scmi_handle *handle)
>>>   	ni->gid = gid;
>>>   	ni->handle = handle;
>>>   
>>> +	ni->notify_wq = alloc_workqueue("scmi_notify",
>>> +					WQ_UNBOUND | WQ_FREEZABLE | WQ_SYSFS,
>>> +					0);
>>
>> What's the use of WQ_SYSFS for SCMI notifications? Do we need it?
>>
> 
> Lukasz asked for it when we were talking about making the workqueues' priorities configurable.
> (not implemented in this series)

I confirm: I asked whether we could have a mechanism to control these
workqueues. They will be running concurrently with other CFS tasks,
which could delay them. They could also be scheduled on a random core,
big or little (depending on utilization), but we may want to pin them
explicitly to some cores, e.g. little ones only. We also discussed a
possible mechanism based on RT threads (which could avoid the CFS
delays), but that would require a lot of changes, so this flag gives us
some control. If you decide to remove this flag, we would probably find
a solution using uclamp or similar when needed.
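
For reference, WQ_SYSFS makes the queue visible under
/sys/devices/virtual/workqueue/, so an unbound workqueue like this one
exposes cpumask and nice attributes that userspace can tune. As an
illustration (values are examples only, assuming the little cores are
CPUs 0-3):

  # pin the notification workers to the little cluster
  echo f > /sys/devices/virtual/workqueue/scmi_notify/cpumask
  # and/or bump their priority
  echo -10 > /sys/devices/virtual/workqueue/scmi_notify/nice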

Regards,
Lukasz

> 
> Thanks
> 
> Cristian
>> -- 
>> Regards,
>> Sudeep

Patch

diff --git a/drivers/firmware/arm_scmi/notify.c b/drivers/firmware/arm_scmi/notify.c
index 7cf61dbe2a8e..d582f71fde5b 100644
--- a/drivers/firmware/arm_scmi/notify.c
+++ b/drivers/firmware/arm_scmi/notify.c
@@ -47,6 +47,27 @@ 
  * as described in the SCMI Protocol specification, while src_id represents an
  * optional, protocol dependent, source identifier (like domain_id, perf_id
  * or sensor_id and so forth).
+ *
+ * Upon reception of a notification message from the platform the SCMI RX ISR
+ * passes the received message payload and some ancillary information (including
+ * an arrival timestamp in nanoseconds) to the core via @scmi_notify() which
+ * pushes the event-data itself on a protocol-dedicated kfifo queue for further
+ * deferred processing as specified in @scmi_events_dispatcher().
+ *
+ * Each protocol has its own dedicated work_struct and worker which, once
+ * kicked by the ISR, takes care to empty its own dedicated queue, delivering
+ * the queued items into the proper notification-chain: notification
+ * processing can proceed concurrently on distinct workers only between events
+ * belonging to different protocols, while delivery of events within the same
+ * protocol is still strictly sequentially ordered by time of arrival.
+ *
+ * Events' information is then extracted from the SCMI Notification messages
+ * and conveyed, converted into a custom per-event report struct, as the
+ * void *data param of the user callback provided by the registered
+ * notifier_block, so that, from the user's perspective, the callback is invoked like:
+ *
+ * int user_cb(struct notifier_block *nb, unsigned long event_id, void *report)
+ *
  */
 
 #define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
@@ -66,6 +87,7 @@ 
 #include <linux/scmi_protocol.h>
 #include <linux/slab.h>
 #include <linux/types.h>
+#include <linux/workqueue.h>
 
 #include "notify.h"
 
@@ -148,6 +170,9 @@ 
 #define REVT_NOTIFY_DISABLE(revt, eid, sid)				       \
 	((revt)->proto->ops->set_notify_enabled((revt)->proto->ni->handle,     \
 						(eid), (sid), false))
+#define REVT_FILL_REPORT(revt, ...)					       \
+	((revt)->proto->ops->fill_custom_report((revt)->proto->ni->handle,     \
+						__VA_ARGS__))
 
 struct scmi_registered_protocol_events_desc;
 
@@ -157,6 +182,7 @@  struct scmi_registered_protocol_events_desc;
  * @gid: GroupID used for devres
  * @handle: A reference to the platform instance
  * @init_work: A work item to perform final initializations of pending handlers
+ * @notify_wq: A reference to the allocated Kernel cmwq
  * @pending_mtx: A mutex to protect @pending_events_handlers
  * @registered_protocols: A statically allocated array containing pointers to
  *			  all the registered protocol-level specific information
@@ -173,6 +199,8 @@  struct scmi_notify_instance {
 
 	struct work_struct				init_work;
 
+	struct workqueue_struct				*notify_wq;
+
 	struct mutex					pending_mtx;
 	struct scmi_registered_protocol_events_desc	**registered_protocols;
 	DECLARE_HASHTABLE(pending_events_handlers, 8);
@@ -182,12 +210,16 @@  struct scmi_notify_instance {
  * struct events_queue  - Describes a queue and its associated worker
  * @sz: Size in bytes of the related kfifo
  * @kfifo: A dedicated Kernel kfifo descriptor
+ * @notify_work: A custom work item bound to this queue
+ * @wq: A reference to the associated workqueue
  *
  * Each protocol has its own dedicated events_queue descriptor.
  */
 struct events_queue {
 	size_t				sz;
 	struct kfifo			kfifo;
+	struct work_struct		notify_work;
+	struct workqueue_struct		*wq;
 };
 
 /**
@@ -309,9 +341,264 @@  struct scmi_event_handler {
 
 #define IS_HNDL_PENDING(hndl)	((hndl)->r_evt == NULL)
 
+static struct scmi_event_handler *
+scmi_get_active_handler(struct scmi_notify_instance *ni, u32 evt_key);
+static void scmi_put_active_handler(struct scmi_notify_instance *ni,
+				    struct scmi_event_handler *hndl);
 static void scmi_put_handler_unlocked(struct scmi_notify_instance *ni,
 				      struct scmi_event_handler *hndl);
 
+/**
+ * scmi_lookup_and_call_event_chain()  - Lookup the proper chain and call it
+ * @ni: A reference to the notification instance to use
+ * @evt_key: The key to use to lookup the related notification chain
+ * @report: The customized event-specific report to pass down to the callbacks
+ *	    as their *data parameter.
+ */
+static inline void
+scmi_lookup_and_call_event_chain(struct scmi_notify_instance *ni,
+				 u32 evt_key, void *report)
+{
+	int ret;
+	struct scmi_event_handler *hndl;
+
+	/* Here ensure the event handler cannot vanish while using it */
+	hndl = scmi_get_active_handler(ni, evt_key);
+	if (IS_ERR_OR_NULL(hndl))
+		return;
+
+	ret = blocking_notifier_call_chain(&hndl->chain,
+					   KEY_XTRACT_EVT_ID(evt_key),
+					   report);
+	/* Notifiers are NOT supposed to cut the chain ... */
+	WARN_ON_ONCE(ret & NOTIFY_STOP_MASK);
+
+	scmi_put_active_handler(ni, hndl);
+}
+
+/**
+ * scmi_process_event_header()  - Dequeue and process an event header
+ * @eq: The queue to use
+ * @pd: The protocol descriptor to use
+ *
+ * Read an event header from the protocol queue into the dedicated scratch
+ * buffer and look for a matching registered event; if an anomalously sized
+ * read is detected, just flush the queue.
+ *
+ * Return:
+ * * a reference to the matching registered event when found
+ * * ERR_PTR(-EINVAL) when NO registered event could be found
+ * * NULL when the queue is empty
+ */
+static inline struct scmi_registered_event *
+scmi_process_event_header(struct events_queue *eq,
+			  struct scmi_registered_protocol_events_desc *pd)
+{
+	unsigned int outs;
+	struct scmi_registered_event *r_evt;
+
+	outs = kfifo_out(&eq->kfifo, pd->eh,
+			 sizeof(struct scmi_event_header));
+	if (!outs)
+		return NULL;
+	if (outs != sizeof(struct scmi_event_header)) {
+		pr_err("SCMI Notifications: corrupted EVT header. Flush.\n");
+		kfifo_reset_out(&eq->kfifo);
+		return NULL;
+	}
+
+	r_evt = SCMI_GET_REVT_FROM_PD(pd, pd->eh->evt_id);
+	if (!r_evt)
+		r_evt = ERR_PTR(-EINVAL);
+
+	return r_evt;
+}
+
+/**
+ * scmi_process_event_payload()  - Dequeue and process an event payload
+ * @eq: The queue to use
+ * @pd: The protocol descriptor to use
+ * @r_evt: The registered event descriptor to use
+ *
+ * Read an event payload from the protocol queue into the dedicated scratch
+ * buffer, fill a custom report and then look for matching event handlers and
+ * call them; skip any unknown event (as marked by scmi_process_event_header())
+ * and, if an anomalously sized read is detected, just flush the queue.
+ *
+ * Return: False when the queue is empty
+ */
+static inline bool
+scmi_process_event_payload(struct events_queue *eq,
+			   struct scmi_registered_protocol_events_desc *pd,
+			   struct scmi_registered_event *r_evt)
+{
+	u32 src_id, key;
+	unsigned int outs;
+	void *report = NULL;
+
+	outs = kfifo_out(&eq->kfifo, pd->eh->payld, pd->eh->payld_sz);
+	if (unlikely(!outs))
+		return false;
+
+	/* Any in-flight event has now been officially processed */
+	pd->in_flight = NULL;
+
+	if (unlikely(outs != pd->eh->payld_sz)) {
+		pr_err("SCMI Notifications: corrupted EVT Payload. Flush.\n");
+		kfifo_reset_out(&eq->kfifo);
+		return false;
+	}
+
+	if (IS_ERR(r_evt)) {
+		pr_warn("SCMI Notifications: SKIP UNKNOWN EVT - proto:%X  evt:%d\n",
+			pd->id, pd->eh->evt_id);
+		return true;
+	}
+
+	report = REVT_FILL_REPORT(r_evt, pd->eh->evt_id, pd->eh->timestamp,
+				  pd->eh->payld, pd->eh->payld_sz,
+				  r_evt->report, &src_id);
+	if (!report) {
+		pr_err("SCMI Notifications: Report not available - proto:%X  evt:%d\n",
+		       pd->id, pd->eh->evt_id);
+		return true;
+	}
+
+	/* At first search for a generic ALL src_ids handler... */
+	key = MAKE_ALL_SRCS_KEY(pd->id, pd->eh->evt_id);
+	scmi_lookup_and_call_event_chain(pd->ni, key, report);
+
+	/* ...then search for any specific src_id */
+	key = MAKE_HASH_KEY(pd->id, pd->eh->evt_id, src_id);
+	scmi_lookup_and_call_event_chain(pd->ni, key, report);
+
+	return true;
+}
+
+/**
+ * scmi_events_dispatcher()  - Common worker logic for all work items.
+ * @work: The work item to use, which is associated to a dedicated events_queue
+ *
+ * Logic:
+ *  1. dequeue one pending RX notification (queued in SCMI RX ISR context)
+ *  2. generate a custom event report from the received event message
+ *  3. look up any registered ALL_SRC_IDs handler:
+ *     -> call the related notification chain passing in the report
+ *  4. look up any registered specific SRC_ID handler:
+ *     -> call the related notification chain passing in the report
+ *
+ * Note that:
+ * * a dedicated per-protocol kfifo queue is used: in this way an anomalous
+ *   flood of events cannot saturate other protocols' queues.
+ * * each per-protocol queue is associated to a distinct work_item, which
+ *   means, in turn, that:
+ *   + all protocols can process their dedicated queues concurrently
+ *     (since notify_wq:max_active != 1)
+ *   + at most one worker instance is allowed to run on the same queue
+ *     concurrently: this ensures that we can have only one concurrent
+ *     reader/writer on the associated kfifo, so that we can use it lock-less
+ *
+ * Context: Process context.
+ */
+static void scmi_events_dispatcher(struct work_struct *work)
+{
+	struct events_queue *eq;
+	struct scmi_registered_protocol_events_desc *pd;
+	struct scmi_registered_event *r_evt;
+
+	eq = container_of(work, struct events_queue, notify_work);
+	pd = container_of(eq, struct scmi_registered_protocol_events_desc,
+			  equeue);
+	/*
+	 * In order to keep the queue lock-less and the number of memcopies
+	 * to the bare minimum needed, the dispatcher accounts for the
+	 * possibility of per-protocol in-flight events: i.e. an event whose
+	 * reception could end up being split across two subsequent runs of this
+	 * worker, first the header, then the payload.
+	 */
+	do {
+		if (likely(!pd->in_flight)) {
+			r_evt = scmi_process_event_header(eq, pd);
+			if (!r_evt)
+				break;
+			pd->in_flight = r_evt;
+		} else {
+			r_evt = pd->in_flight;
+		}
+	} while (scmi_process_event_payload(eq, pd, r_evt));
+}
+
+/**
+ * scmi_notify()  - Queues a notification for further deferred processing
+ * @handle: The handle identifying the platform instance from which the
+ *	    dispatched event is generated
+ * @proto_id: Protocol ID
+ * @evt_id: Event ID (msgID)
+ * @buf: Event Message Payload (without the header)
+ * @len: Event Message Payload size
+ * @ts: RX Timestamp in nanoseconds (boottime)
+ *
+ * Context: Called in interrupt context to queue a received event for
+ * deferred processing.
+ *
+ * Return: 0 on Success
+ */
+int scmi_notify(const struct scmi_handle *handle, u8 proto_id, u8 evt_id,
+		const void *buf, size_t len, u64 ts)
+{
+	struct scmi_registered_event *r_evt;
+	struct scmi_event_header eh;
+	struct scmi_notify_instance *ni;
+
+	/* Ensure notify_priv is updated */
+	smp_rmb();
+	if (unlikely(!handle->notify_priv))
+		return 0;
+	ni = handle->notify_priv;
+
+	r_evt = SCMI_GET_REVT(ni, proto_id, evt_id);
+	if (unlikely(!r_evt))
+		return -EINVAL;
+
+	if (unlikely(len > r_evt->evt->max_payld_sz)) {
+		pr_err("SCMI Notifications: discard badly sized message\n");
+		return -EINVAL;
+	}
+	if (unlikely(kfifo_avail(&r_evt->proto->equeue.kfifo) <
+		     sizeof(eh) + len)) {
+		pr_warn("SCMI Notifications: queue full dropping proto_id:%d  evt_id:%d  ts:%lld\n",
+			proto_id, evt_id, ts);
+		return -ENOMEM;
+	}
+
+	eh.timestamp = ts;
+	eh.evt_id = evt_id;
+	eh.payld_sz = len;
+	/*
+	 * Header and payload are enqueued with two distinct kfifo_in() (so non
+	 * atomic), but this situation is handled properly on the consumer side
+	 * with in-flight events tracking.
+	 */
+	kfifo_in(&r_evt->proto->equeue.kfifo, &eh, sizeof(eh));
+	kfifo_in(&r_evt->proto->equeue.kfifo, buf, len);
+	/*
+	 * We don't care about the return value here, since we just want to
+	 * ensure that a work item is queued whenever some items have been
+	 * pushed onto the kfifo:
+	 * - if a work item was already queued, queueing a new one simply
+	 *   fails, since it is not needed
+	 * - if a work item was not queued, it will be now, even if a worker
+	 *   was in fact already running: this behavior avoids any possible
+	 *   race when this function pushes new items onto the kfifo after the
+	 *   executing worker had already determined the kfifo to be empty
+	 *   and was terminating.
+	 */
+	queue_work(r_evt->proto->equeue.wq,
+		   &r_evt->proto->equeue.notify_work);
+
+	return 0;
+}
+
 /**
  * scmi_kfifo_free()  - Devres action helper to free the kfifo
  * @kfifo: The kfifo to free
@@ -334,13 +621,22 @@  static void scmi_kfifo_free(void *kfifo)
 static int scmi_initialize_events_queue(struct scmi_notify_instance *ni,
 					struct events_queue *equeue, size_t sz)
 {
+	int ret;
+
 	if (kfifo_alloc(&equeue->kfifo, sz, GFP_KERNEL))
 		return -ENOMEM;
 	/* Size could have been roundup to power-of-two */
 	equeue->sz = kfifo_size(&equeue->kfifo);
 
-	return devm_add_action_or_reset(ni->handle->dev, scmi_kfifo_free,
-					&equeue->kfifo);
+	ret = devm_add_action_or_reset(ni->handle->dev, scmi_kfifo_free,
+				       &equeue->kfifo);
+	if (ret)
+		return ret;
+
+	INIT_WORK(&equeue->notify_work, scmi_events_dispatcher);
+	equeue->wq = ni->notify_wq;
+
+	return ret;
 }
 
 /**
@@ -741,6 +1037,37 @@  scmi_get_or_create_handler(struct scmi_notify_instance *ni, u32 evt_key)
 	return __scmi_event_handler_get_ops(ni, evt_key, true);
 }
 
+/**
+ * scmi_get_active_handler()  - Helper to get active handlers only
+ * @ni: A reference to the notification instance to use
+ * @evt_key: The event key to use
+ *
+ * Search for the desired handler matching the key only in the per-protocol
+ * table of registered handlers: this is called only from the dispatching path,
+ * so we want to be as quick as possible and do not care about pending handlers.
+ *
+ * Return: A properly refcounted active handler
+ */
+static struct scmi_event_handler *
+scmi_get_active_handler(struct scmi_notify_instance *ni, u32 evt_key)
+{
+	struct scmi_registered_event *r_evt;
+	struct scmi_event_handler *hndl = NULL;
+
+	r_evt = SCMI_GET_REVT(ni, KEY_XTRACT_PROTO_ID(evt_key),
+			      KEY_XTRACT_EVT_ID(evt_key));
+	if (likely(r_evt)) {
+		mutex_lock(&r_evt->proto->registered_mtx);
+		hndl = KEY_FIND(r_evt->proto->registered_events_handlers,
+				hndl, evt_key);
+		if (likely(hndl))
+			refcount_inc(&hndl->users);
+		mutex_unlock(&r_evt->proto->registered_mtx);
+	}
+
+	return hndl;
+}
+
 /**
  * __scmi_enable_evt()  - Enable/disable events generation
  * @r_evt: The registered event to act upon
@@ -856,6 +1183,16 @@  static void scmi_put_handler(struct scmi_notify_instance *ni,
 	mutex_unlock(&ni->pending_mtx);
 }
 
+static void scmi_put_active_handler(struct scmi_notify_instance *ni,
+					  struct scmi_event_handler *hndl)
+{
+	struct scmi_registered_event *r_evt = hndl->r_evt;
+
+	mutex_lock(&r_evt->proto->registered_mtx);
+	scmi_put_handler_unlocked(ni, hndl);
+	mutex_unlock(&r_evt->proto->registered_mtx);
+}
+
 /**
  * scmi_event_handler_enable_events()  - Enable events associated to an handler
  * @hndl: The Event handler to act upon
@@ -1085,6 +1422,12 @@  int scmi_notification_init(struct scmi_handle *handle)
 	ni->gid = gid;
 	ni->handle = handle;
 
+	ni->notify_wq = alloc_workqueue("scmi_notify",
+					WQ_UNBOUND | WQ_FREEZABLE | WQ_SYSFS,
+					0);
+	if (!ni->notify_wq)
+		goto err;
+
 	ni->registered_protocols = devm_kcalloc(handle->dev, SCMI_MAX_PROTO,
 						sizeof(char *), GFP_KERNEL);
 	if (!ni->registered_protocols)
@@ -1126,5 +1469,12 @@  void scmi_notification_exit(struct scmi_handle *handle)
 		return;
 	ni = handle->notify_priv;
 
+	handle->notify_priv = NULL;
+	/* Ensure handle is up to date */
+	smp_wmb();
+
+	/* Destroy while letting pending work complete */
+	destroy_workqueue(ni->notify_wq);
+
 	devres_release_group(ni->handle->dev, ni->gid);
 }
diff --git a/drivers/firmware/arm_scmi/notify.h b/drivers/firmware/arm_scmi/notify.h
index f0561fb30970..a55c041180bf 100644
--- a/drivers/firmware/arm_scmi/notify.h
+++ b/drivers/firmware/arm_scmi/notify.h
@@ -47,6 +47,11 @@  struct scmi_event {
  *			using the proper custom protocol commands.
  *			Return true if at least one the required src_id
  *			has been successfully enabled/disabled
+ * @fill_custom_report: fills a custom event report from the provided
+ *			event message payld, identifying the event-specific
+ *			src_id.
+ *			Return NULL on failure, otherwise @report fully
+ *			populated
  *
  * Context: Helpers described in &struct scmi_protocol_event_ops are called
  *	    only in process context.
@@ -54,6 +59,9 @@  struct scmi_event {
 struct scmi_protocol_event_ops {
 	bool (*set_notify_enabled)(const struct scmi_handle *handle,
 				   u8 evt_id, u32 src_id, bool enabled);
+	void *(*fill_custom_report)(const struct scmi_handle *handle,
+				    u8 evt_id, u64 timestamp, const void *payld,
+				    size_t payld_sz, void *report, u32 *src_id);
 };
 
 int scmi_notification_init(struct scmi_handle *handle);
@@ -64,5 +72,7 @@  int scmi_register_protocol_events(const struct scmi_handle *handle,
 				  const struct scmi_protocol_event_ops *ops,
 				  const struct scmi_event *evt, int num_events,
 				  int num_sources);
+int scmi_notify(const struct scmi_handle *handle, u8 proto_id, u8 evt_id,
+		const void *buf, size_t len, u64 ts);
 
 #endif /* _SCMI_NOTIFY_H */