
[v3,4/6] remoteproc: qcom: q6v5-pil: Add custom dump function for modem

Message ID 20180727152003.11663-5-sibis@codeaurora.org
State New, archived
Series Add coredump support for Q6v5 Modem remoteproc

Commit Message

Sibi Sankar July 27, 2018, 3:20 p.m. UTC
The per-segment dump function is responsible for loading the mba before
the device memory segments associated with the coredump can be populated,
and for cleaning up the resources once the coredump is complete.

Signed-off-by: Sibi Sankar <sibis@codeaurora.org>
---
 drivers/remoteproc/qcom_q6v5_pil.c | 25 +++++++++++++++++++++++++
 1 file changed, 25 insertions(+)

Comments

Bjorn Andersson Oct. 8, 2018, 6:45 a.m. UTC | #1
On Fri 27 Jul 08:20 PDT 2018, Sibi Sankar wrote:
> diff --git a/drivers/remoteproc/qcom_q6v5_pil.c b/drivers/remoteproc/qcom_q6v5_pil.c
[..]
> +static void qcom_q6v5_dump_segment(struct rproc *rproc, void *ptr, size_t len,
> +								   void *priv)
> +{
> +	int ret = 0;
> +	struct q6v5 *qproc = (struct q6v5 *)rproc->priv;
> +	static u32 pending_mask;

I dislike that this is a static variable. And it tracks the segments
that have already been dumped, i.e. the !pending ones.

> +
> +	/* Unlock mba before copying segments */
> +	if (!qproc->mba_loaded)
> +		ret = q6v5_mba_load(qproc);
> +
> +	if (!ptr || ret)
> +		memset(priv, 0xff, len);
> +	else
> +		memcpy(priv, ptr, len);
> +
> +	pending_mask++;

This is a "count" and not a "mask".

I can see a few different cases where one would like to be able to pass
custom data/information from the segment registration to the dump
function. So how about adding a "void *priv" to the dump segment?

For this particular case we could cast segment->priv to an unsigned
long (as the two are always the same size) and use that as a bitmask,
which we use to update pending_mask.
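
For illustration, a rough sketch of how that could look. The seg_priv
argument and the dump_segment_mask/dump_complete_mask fields are assumed
names for this sketch, not part of the patch as posted:

/* Sketch only: seg_priv carries a per-segment bitmask set at
 * registration time; the two masks are assumed fields on struct q6v5. */
static void qcom_q6v5_dump_segment(struct rproc *rproc, void *ptr, size_t len,
				   void *dest, void *seg_priv)
{
	struct q6v5 *qproc = (struct q6v5 *)rproc->priv;
	unsigned long mask = (unsigned long)seg_priv;
	int ret = 0;

	/* Unlock mba before copying the first segment */
	if (!qproc->mba_loaded)
		ret = q6v5_mba_load(qproc);

	if (!ptr || ret)
		memset(dest, 0xff, len);
	else
		memcpy(dest, ptr, len);

	/* Mark this segment done; reclaim once every segment is dumped */
	qproc->dump_segment_mask |= mask;
	if (qproc->dump_segment_mask == qproc->dump_complete_mask &&
	    qproc->mba_loaded)
		q6v5_mba_reclaim(qproc);
}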

> +	if (pending_mask == qproc->valid_mask) {
> +		if (qproc->mba_loaded)
> +			q6v5_mba_reclaim(qproc);
> +		pending_mask = 0;
> +	}

I think it would be cleaner to reset pending_mask in the start function,
and then return early in this function once we have dumped all the
segments.

If so, can pending_mask == 0 and pending_mask == all be the triggers for
loading and reclaiming the mba, so we don't need two different trackers
for this?
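
Something like the following in the start path, where dump_segment_mask
is again an assumed field rather than anything in the patch as posted:

static int q6v5_start(struct rproc *rproc)
{
	struct q6v5 *qproc = (struct q6v5 *)rproc->priv;

	/* No segments dumped yet; the first dump call will load the mba */
	qproc->dump_segment_mask = 0;

	/* ... existing start sequence unchanged ... */
}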

> +}
> +

Regards,
Bjorn
Sibi Sankar Oct. 9, 2018, 4:19 p.m. UTC | #2
Hi Bjorn,
Thanks for the review !

On 2018-10-08 12:15, Bjorn Andersson wrote:
> On Fri 27 Jul 08:20 PDT 2018, Sibi Sankar wrote:
>> diff --git a/drivers/remoteproc/qcom_q6v5_pil.c 
>> b/drivers/remoteproc/qcom_q6v5_pil.c
> [..]
>> +static void qcom_q6v5_dump_segment(struct rproc *rproc, void *ptr, 
>> size_t len,
>> +								   void *priv)
>> +{
>> +	int ret = 0;
>> +	struct q6v5 *qproc = (struct q6v5 *)rproc->priv;
>> +	static u32 pending_mask;
> 
> I dislike that this is a static variable. And it tracks the segments
> that have already been dumped, i.e. the !pending ones.
> 
>> +
>> +	/* Unlock mba before copying segments */
>> +	if (!qproc->mba_loaded)
>> +		ret = q6v5_mba_load(qproc);
>> +
>> +	if (!ptr || ret)
>> +		memset(priv, 0xff, len);
>> +	else
>> +		memcpy(priv, ptr, len);
>> +
>> +	pending_mask++;
> 
> This is a "count" and not a "mask".
> 

I will rename it to "count" in the next re-spin. We could implement this
as a mask as well; the only disadvantage I see in that case is the need
for another flag to determine whether the mba is loaded, since the mask
for the first segment may not be zero (the first segment may not be
valid).

> I can see a few different cases where one would like to be able to pass
> custom data/information from the segment registration to the dump
> function. So how about adding a "void *priv" to the dump segment?
> 
> For this particular case we could cast segment->priv to an unsigned
> long (as the two are always the same size) and use that as a bitmask,
> which we use to update pending_mask.
> 

Sure, I will do the same.

>> +	if (pending_mask == qproc->valid_mask) {
>> +		if (qproc->mba_loaded)
>> +			q6v5_mba_reclaim(qproc);
>> +		pending_mask = 0;
>> +	}
> 
> I think it would be cleaner to reset pending_mask in the start function,
> and then return early in this function once we have dumped all the
> segments.
> 
> If so, can pending_mask == 0 and pending_mask == all be the triggers for
> loading and reclaiming the mba, so we don't need two different trackers
> for this?
> 

With the private data stored per dump segment, this becomes much simpler :)
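
Something along these lines is what I have in mind for the next spin;
the priv parameter on the registration helper and the dump_complete_mask
field are assumptions until that rework is posted:

	/* In the q6v5_mpss_load() segment loop, give segment i its own
	 * bit and record the full mask of valid segments. */
	ret = rproc_coredump_add_custom_segment(rproc, da, size,
						qcom_q6v5_dump_segment,
						(void *)BIT(i));
	if (ret)
		return ret;

	qproc->dump_complete_mask |= BIT(i);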

>> +}
>> +
> 
> Regards,
> Bjorn

Patch

diff --git a/drivers/remoteproc/qcom_q6v5_pil.c b/drivers/remoteproc/qcom_q6v5_pil.c
index eacf9f0bf49e..ac3342f9ea5a 100644
--- a/drivers/remoteproc/qcom_q6v5_pil.c
+++ b/drivers/remoteproc/qcom_q6v5_pil.c
@@ -182,6 +182,7 @@  struct q6v5 {
 	struct qcom_sysmon *sysmon;
 	bool need_mem_protection;
 	bool has_alt_reset;
+	u32 valid_mask;
 	int mpss_perm;
 	int mba_perm;
 	int version;
@@ -924,6 +925,30 @@  static int q6v5_mpss_load(struct q6v5 *qproc)
 	return ret < 0 ? ret : 0;
 }
 
+static void qcom_q6v5_dump_segment(struct rproc *rproc, void *ptr, size_t len,
+								   void *priv)
+{
+	int ret = 0;
+	struct q6v5 *qproc = (struct q6v5 *)rproc->priv;
+	static u32 pending_mask;
+
+	/* Unlock mba before copying segments */
+	if (!qproc->mba_loaded)
+		ret = q6v5_mba_load(qproc);
+
+	if (!ptr || ret)
+		memset(priv, 0xff, len);
+	else
+		memcpy(priv, ptr, len);
+
+	pending_mask++;
+	if (pending_mask == qproc->valid_mask) {
+		if (qproc->mba_loaded)
+			q6v5_mba_reclaim(qproc);
+		pending_mask = 0;
+	}
+}
+
 static int q6v5_start(struct rproc *rproc)
 {
 	struct q6v5 *qproc = (struct q6v5 *)rproc->priv;