I conjecture that the following inequality
$$\sum_{i=1}^n g (p_i) \ge \sum_{i=1}^n g \left (\frac{-p_i \log p_i}{H(\boldsymbol{p})} \right )$$
holds for any continuous convex function $g$ and any probability vector $\boldsymbol{p}=(p_1,\dots,p_n)\ge 0$ with $\sum_{i=1}^np_i=1$, where $H(\boldsymbol{p})=-\sum_{i=1}^n p_i\log p_i$ denotes the Shannon entropy of the probability distribution $\boldsymbol{p}$. This is equivalent to the following majorization relation:
$$ -\boldsymbol{p}\log \boldsymbol{p} \prec H(\boldsymbol{p})\boldsymbol{p}.$$
There are many other equivalent conditions for a majorization relation; see page 14 of this monograph on majorization for a summary (page 45 of the pdf). Hence, proving a majorization relation has many interesting and non-trivial consequences. This is why I think settling the relation is important: as far as I could check, it cannot be obtained simply by the known procedures for generating majorization relations (see Section 5 of the book).
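For a quick numerical sense of the claim, here is a minimal Python/NumPy sketch (my own, not a proof) that tests the majorization via the partial-sum definition of $\prec$ on random probability vectors drawn from a Dirichlet distribution; the sample sizes and tolerance are arbitrary choices:

```python
import numpy as np

def majorizes(x, y, tol=1e-12):
    """True iff x majorizes y (equal totals assumed): every partial sum of the
    decreasingly sorted x dominates the corresponding partial sum of y."""
    xs = np.sort(x)[::-1]
    ys = np.sort(y)[::-1]
    return bool(np.all(np.cumsum(xs) >= np.cumsum(ys) - tol))

# Randomized check of  -p log p / H(p)  ≺  p  (the conjecture above).
rng = np.random.default_rng(0)
for _ in range(10_000):
    n = int(rng.integers(2, 8))
    p = rng.dirichlet(np.ones(n))
    H = -np.sum(p * np.log(p))
    q = -p * np.log(p) / H
    assert majorizes(p, q), (p, q)
print("no counterexample found in 10,000 random trials")
```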
In part of my answer to this question, you can find a proof for the case where all probabilities are less than $e^{-1}$; there it is shown, equivalently, that $\boldsymbol{p}$ majorizes the normalized vector $\small \frac{-\boldsymbol{p}\log \boldsymbol{p}}{H(\boldsymbol{p})}$.
I verified the above conjecture for $\color{green}{n=2}$ by finding a doubly stochastic matrix that satisfies the following equation (the existence of such a matrix is equivalent to the above majorization):
$$ \begin{bmatrix} -p_1\log p_1 \\ -p_2\log p_2 \end{bmatrix}= \begin{bmatrix} x & 1-x \\ 1-x & x \end{bmatrix} \begin{bmatrix} p_1\left (-p_1\log p_1-p_2\log p_2 \right) \\ p_2\left (-p_1\log p_1-p_2\log p_2 \right) \end{bmatrix} $$
where $x$ is given by ($p_2=1-p_1$):
$$x=\frac{p_1\log p_1}{\left(2p_1-1\right)\left(p_1\log p_1+\left(1-p_1\right)\log\left(1-p_1\right)\right)}-\frac{1-p_1}{2p_1-1},$$
which lies in $[0,1]$ for every $p_1 \in (0,1)$; the singularity at $p_1=\tfrac12$ is removable, since the two vectors coincide there (source-2).
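For completeness, a small Python/NumPy sketch (the grid and tolerance are arbitrary) that evaluates this closed form, confirms $x\in[0,1]$, and plugs it back into the matrix equation:

```python
import numpy as np

def x_entry(p1):
    """Closed-form x from the formula above (assumes 0 < p1 < 1, p1 != 1/2)."""
    p2 = 1.0 - p1
    s = p1 * np.log(p1) + p2 * np.log(p2)          # s = -H(p)
    return p1 * np.log(p1) / ((2 * p1 - 1) * s) - p2 / (2 * p1 - 1)

for p1 in np.linspace(0.01, 0.99, 99):
    if abs(p1 - 0.5) < 1e-9:
        continue                                    # at p1 = 1/2 both vectors coincide
    p2 = 1.0 - p1
    x = x_entry(p1)
    H = -(p1 * np.log(p1) + p2 * np.log(p2))
    D = np.array([[x, 1 - x], [1 - x, x]])          # doubly stochastic when 0 <= x <= 1
    lhs = np.array([-p1 * np.log(p1), -p2 * np.log(p2)])
    rhs = D @ np.array([p1 * H, p2 * H])
    assert 0.0 <= x <= 1.0 and np.allclose(lhs, rhs), p1
print("x stays in [0,1] and the matrix equation holds on the grid")
```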
Update 1:
In an answer below, I proved a related weaker result.
To reach a proof or a counterexample, one should notice that the claim holds when all probabilities are less than $e^{-1}$. Thus two remaining cases should be examined: either one or two of the probabilities are greater than $e^{-1}$ (no more than two can be, since the probabilities sum to $1$).
I also verified the conjecture for $\color{green}{n=3,4}$ by numerically checking the following equivalent condition (source-3, source-4):
$$\sum_{i=1}^n \max \left (p_i-C, 0 \right ) \ge \sum_{i=1}^n \max \left (\frac{-p_i \log p_i}{H(\boldsymbol{p})}-C, 0 \right ), C \in \mathbb R. $$
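A minimal Python/NumPy sketch of this check (my own rendering of it): since both sides are piecewise linear in $C$ and agree for $C$ below the smallest and above the largest component, it suffices to test $C$ at the components themselves:

```python
import numpy as np

def max_condition_holds(p, tol=1e-12):
    """Check  sum_i max(p_i - C, 0) >= sum_i max(q_i - C, 0)  for all real C,
    where q = -p*log(p)/H(p).  Both sides are piecewise linear in C and agree
    outside the range of the components, so testing C at the components suffices."""
    H = -np.sum(p * np.log(p))
    q = -p * np.log(p) / H
    for C in np.concatenate([p, q]):
        if np.sum(np.maximum(p - C, 0.0)) < np.sum(np.maximum(q - C, 0.0)) - tol:
            return False
    return True

rng = np.random.default_rng(1)
checks = (max_condition_holds(rng.dirichlet(np.ones(int(rng.integers(3, 5)))))
          for _ in range(10_000))
print("all random n = 3, 4 samples pass:", all(checks))
```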