Let $\mathfrak{g}=\mathfrak{g}_0\oplus\mathfrak{g}_1$ be a simple Lie superalgebra, and for simplicity take $\mathfrak{g}=\mathfrak{osp}(m|2n)$, so that $\mathfrak{g}_0=\mathfrak{so}(m)\oplus\mathfrak{sp}(2n)$. Let $\mathfrak{h}\subset\mathfrak{g}$ be the usual Cartan subalgebra of diagonal matrices. What is wrong with the following reasoning?
Let $\lambda\in\mathfrak{h}^*$ be a dominant integral weight for $\mathfrak{g}_0$, so that the unique irreducible $\mathfrak{g}_0$-module $L^0(\lambda)$ of highest weight $\lambda$ is finite-dimensional.
Now let $L(\lambda)$ denote the unique irreducible $\mathfrak{g}$-module of highest weight $\lambda$. Restricting, it is naturally a $\mathfrak{g}_0$-module with highest weight $\lambda$, so there is an injective map of $\mathfrak{g}_0$-modules $L^0(\lambda)\to L(\lambda)$. Hence, by the $\operatorname{Hom}$-$\otimes$ adjunction, we get a nonzero map of $\mathfrak{g}$-modules $U\mathfrak{g}\otimes_{U\mathfrak{g}_0}L^0(\lambda)\to L(\lambda)$. Since $L(\lambda)$ is irreducible, this map (being nonzero) must be surjective.
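(To spell out the adjunction step: the isomorphism I have in mind is the standard induction-restriction adjunction, i.e. Frobenius reciprocity,
$$\operatorname{Hom}_{\mathfrak{g}}\bigl(U\mathfrak{g}\otimes_{U\mathfrak{g}_0}L^0(\lambda),\,L(\lambda)\bigr)\;\cong\;\operatorname{Hom}_{\mathfrak{g}_0}\bigl(L^0(\lambda),\,\operatorname{Res}^{\mathfrak{g}}_{\mathfrak{g}_0}L(\lambda)\bigr),$$
under which the claimed injection on the right corresponds to a nonzero $\mathfrak{g}$-module map on the left.)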
But PBW for Lie superalgebras tells us that $U\mathfrak{g}$ is a finitely generated free $U\mathfrak{g}_0$-module, so $U\mathfrak{g}\otimes_{U\mathfrak{g}_0}L^0(\lambda)$ is finite-dimensional, which implies that $L(\lambda)$ is finite-dimensional.
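(For concreteness, the super PBW theorem gives an isomorphism of left $U\mathfrak{g}_0$-modules
$$U\mathfrak{g}\;\cong\;U\mathfrak{g}_0\otimes\Lambda(\mathfrak{g}_1),$$
since the odd generators square to even elements. So $U\mathfrak{g}$ is free of rank $2^{\dim\mathfrak{g}_1}$ over $U\mathfrak{g}_0$, and the induced module would have dimension $2^{\dim\mathfrak{g}_1}\cdot\dim L^0(\lambda)<\infty$.)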
I know this reasoning must be wrong, because the condition that $\lambda$ be dominant integral with respect to $\mathfrak{g}_0$ is not sufficient for $L(\lambda)$ to be finite-dimensional.
Any help/ideas/hints are greatly appreciated!