The issue is this:
static int dpaa_eth_macless_probe(struct platform_device *_of_dev)
{
        [...]
        INIT_LIST_HEAD(&priv->dpa_fq_list);
        err = dpa_fq_probe_macless(dev, &priv->dpa_fq_list, RX);
        if (!err)
                err = dpa_fq_probe_macless(dev, &priv->dpa_fq_list, TX);
        if (err < 0)
                goto fq_probe_failed;
        [...]
        /* Add the FQs to the interface, and make them active */
        /* For MAC-less devices we only get here for RX frame queues
         * initialization, which are the TX queues of the other
         * partition.
         * It is safe to rely on one partition to set the FQ taildrop
         * threshold for the TX queues of the other partition
         * because the ERN notifications will be received by the
         * partition doing qman_enqueue.
         */
        err = dpa_fqs_init(dev, &priv->dpa_fq_list, true);
        if (err < 0)
                goto fq_alloc_failed;
        [...]
The priv->dpa_fq_list contains FQ_TYPE_RX_PCD and FQ_TYPE_TX items. I do
not understand what "For MAC-less devices we only get here for RX frame
queues initialization" means in this context. Since dpa_fqs_init() is
called with td_enable == true, we end up in dpa_fq_init() with:
int dpa_fq_init(struct dpa_fq *dpa_fq, bool td_enable)
{
        [...]
        if (dpa_fq->fq_type == FQ_TYPE_TX ||
            dpa_fq->fq_type == FQ_TYPE_TX_CONFIRM ||
            dpa_fq->fq_type == FQ_TYPE_TX_CONF_MQ) {
                [...]
                initfq.we_mask |= QM_INITFQ_WE_OAC;
                [...]
        }

        if (td_enable) {
                initfq.we_mask |= QM_INITFQ_WE_TDTHRESH;
                qm_fqd_taildrop_set(&initfq.fqd.td, DPA_FQ_TD, 1);
                initfq.fqd.fq_ctrl = QM_FQCTRL_TDE;
        }
With td_enable == true and dpa_fq->fq_type == FQ_TYPE_TX, this later runs
into the following check:
int qman_init_fq(struct qman_fq *fq, u32 flags, struct qm_mcc_initfq *opts)
{
        [...]
        if (opts && (opts->we_mask & QM_INITFQ_WE_OAC)) {
                /* And can't be set at the same time as TDTHRESH */
                if (opts->we_mask & QM_INITFQ_WE_TDTHRESH)
                        return -EINVAL;
        }
This aborts the initialization of the MAC-less driver. I do not
understand why this path is not taken on the SDK Linux system.
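For illustration only (this is not the SDK code and may miss the actual
reason), the -EINVAL could be avoided by not requesting a taildrop
threshold for the TX-type queues for which dpa_fq_init() already sets
QM_INITFQ_WE_OAC:

        if (td_enable &&
            dpa_fq->fq_type != FQ_TYPE_TX &&
            dpa_fq->fq_type != FQ_TYPE_TX_CONFIRM &&
            dpa_fq->fq_type != FQ_TYPE_TX_CONF_MQ) {
                initfq.we_mask |= QM_INITFQ_WE_TDTHRESH;
                qm_fqd_taildrop_set(&initfq.fqd.td, DPA_FQ_TD, 1);
                initfq.fqd.fq_ctrl = QM_FQCTRL_TDE;
        }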
Update #3277.
Seed the receive buffers of each affine software portal with only 8 mbuf
clusters (16KiB) instead of 128 (256KiB). There is one affine software
portal per processor, see dpaa_bp_seed().
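A minimal sketch of the per-portal seeding, assuming the FreeBSD mbuf
cluster size MCLBYTES (2048 bytes); DPAA_BP_SEED_COUNT, dpaa_bp_give() and
dpaa_bp_seed_portal() are placeholder names for the dpaa_bp_seed()
internals, not the actual code:

#include <sys/param.h>
#include <sys/systm.h>
#include <sys/malloc.h>
#include <sys/mbuf.h>

struct dpaa_bp;
void dpaa_bp_give(struct dpaa_bp *bp, void *buf);      /* placeholder */

#define DPAA_BP_SEED_COUNT 8    /* 8 * MCLBYTES = 16KiB, was 128 * MCLBYTES = 256KiB */

/* Called once for each processor, since every processor has its own
 * affine software portal.
 */
static void
dpaa_bp_seed_portal(struct dpaa_bp *bp)
{
        int i;

        for (i = 0; i < DPAA_BP_SEED_COUNT; ++i) {
                struct mbuf *m = m_getcl(M_WAITOK, MT_DATA, 0);

                /* Hand the cluster over to the buffer pool. */
                dpaa_bp_give(bp, mtod(m, void *));
        }
}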
By default, the network interfaces use a pool channel, see
dpaa_get_channel() in dpaa_eth_priv_probe(). To use a dedicated QMan
software portal instead, set libbsd,dedicated-portal = "enabled"; in the
device tree. This option is useful for special-purpose 10Gbit/s Ethernet
processing.
/ {
        soc: soc@ffe000000 {
                fman0: fman@400000 {
                        enet7: ethernet@f2000 {
                                libbsd,dedicated-portal = "enabled";
                        };
                };
        };
};
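For illustration, such a string property could be tested with libfdt as
sketched below; fdt_prop_equals() is a hypothetical helper and fdt and
enet_node are assumed to come from the surrounding probe code:

#include <libfdt.h>
#include <stdbool.h>
#include <string.h>

/* Hypothetical helper: true if the string property exists and equals the
 * expected value.  libfdt returns string properties including the
 * terminating NUL, so strcmp() is sufficient.
 */
static bool
fdt_prop_equals(const void *fdt, int node, const char *name,
    const char *expected)
{
        int len;
        const char *prop = fdt_getprop(fdt, node, name, &len);

        return prop != NULL && len > 0 && strcmp(prop, expected) == 0;
}

/* Example use at probe time. */
static bool
enet_uses_dedicated_portal(const void *fdt, int enet_node)
{
        return fdt_prop_equals(fdt, enet_node, "libbsd,dedicated-portal",
            "enabled");
}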
The dequeue support of the processor-affine QMan portals may be
explicitly disabled. The dequeue support is responsible for receiving
frames and for completing frame transmissions, e.g. buffer recycling.
Without at least one portal with dequeue support enabled, there will be
no networking.
/ {
        qman-portals@ff6000000 {
                qman-portal@0 {
                        libbsd,dequeue = "disabled";
                };
        };
};
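The same hypothetical fdt_prop_equals() helper from the previous sketch
would also cover this per-portal switch:

/* fdt and portal_node (a qman-portal@N node) are assumed context. */
static bool
portal_dequeue_disabled(const void *fdt, int portal_node)
{
        return fdt_prop_equals(fdt, portal_node, "libbsd,dequeue",
            "disabled");
}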
Do not enable stashing in the QMan software portal configuration
(QCSPi_CFG[RE, SE]) if the PAMU support is not configured.
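The intent, as an entirely hypothetical sketch (qcsp_cfg_read(),
qcsp_cfg_write(), QCSP_CFG_RE, QCSP_CFG_SE and pamu_is_configured() stand
in for the real CCSR accessors, bit definitions and PAMU query):

        /* All names below are placeholders, not the driver API. */
        uint32_t cfg = qcsp_cfg_read(portal_index);

        if (pamu_is_configured())
                cfg |= QCSP_CFG_RE | QCSP_CFG_SE;
        else
                cfg &= ~(QCSP_CFG_RE | QCSP_CFG_SE);

        qcsp_cfg_write(portal_index, cfg);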
Signed-off-by: Sebastian Huber <sebastian.huber@embedded-brains.de>
Adding (p->irq_sources & ~QM_PIRQ_CSCI) to the clear mask means, for
example, that QM_PIRQ_EQCI is cleared unconditionally. This is a problem
if such an interrupt is raised after the interrupt status has been read
and before the interrupt status is cleared.
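Sketch of the corrected pattern, abbreviated from the imported portal
interrupt handler (the handler body and the accessor names follow the
Freescale driver and are assumed here):

static irqreturn_t portal_isr(int irq, void *ptr)
{
        struct qman_portal *p = ptr;
        u32 is = qm_isr_status_read(&p->p) & p->irq_sources;

        if (is == 0)
                return IRQ_NONE;

        /* ... handle the sources recorded in "is" ... */

        /* Clear only the bits that were actually read above.  A wider mask
         * such as is | (p->irq_sources & ~QM_PIRQ_CSCI) would also
         * acknowledge e.g. a QM_PIRQ_EQCI raised between the status read
         * and this write, so that interrupt would be lost.
         */
        qm_isr_status_clear(&p->p, is);
        return IRQ_HANDLED;
}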
Imported from the Freescale Linux repository
git://git.freescale.com/ppc/upstream/linux.git
commit 2774c204cd8bfc56a200ff4dcdfc9cdf5b6fc161.
The Linux compatibility layer is partly from FreeBSD.