Welcome to datascience.SE! I'm going to have to partly disagree with the answer by @Aviral_107118021. Unfortunately, I think you have far too many features for a sample size of 226 to achieve a reasonable vanilla logistic regression model. This is a common problem, often described simply as "$p \gg n$" (the number of variables/features is much greater than the number of samples). The danger is that you end up with a model that fits the training data extremely well but is severely overfitted and will not generalise to further (unseen) data, severely limiting its usefulness.
In statistics, one common rule of thumb is that a minimum of 10-20 samples is needed for each variable (feature) in the dataset. Frank Harrell has written extensively about this, and in the first linked answer below he says that at least 96 samples are needed just to reliably estimate an intercept-only logistic regression model (i.e., a model without any variables/features), and then a further 10-20 samples are needed for each additional variable/feature:
https://stats.stackexchange.com/questions/11724/minimum-number-of-observations-for-logistic-regression
https://stats.stackexchange.com/questions/26016/sample-size-for-logistic-regression
https://stats.stackexchange.com/questions/29612/minimum-number-of-observations-for-multiple-linear-regression
[The last of those links is about linear regression rather than logistic regression, but many of the issues apply to both, and there are some very interesting answers in that thread.]
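To put rough numbers on that (purely as a back-of-the-envelope check, not a hard cutoff): with $n = 226$ samples and roughly 96 of them needed for the intercept alone, you have about $226 - 96 = 130$ observations left, which at 10-20 per feature supports only around $130/20 \approx 6$ to $130/10 = 13$ candidate features before overfitting becomes a serious risk.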
Obviously this rule of thumb (and there are other similar ones) should not be applied without careful thought. Some other approaches to be aware of, or to make use of, are:
- Dimensionality reduction (PCA, for example) can be used to reduce the number of features in the dataset; extremely highly correlated features could also simply be removed. See the first sketch after this list.
- Regularisation, such as LASSO and ridge regression, can overcome some of the problems of $p \gg n$ (see the penalised logistic regression sketch below).
- Cross-validation. Obviously there will be a reduction in the effective sample size when you split the data for validation, which compounds the problem, but CV is a great way to mitigate the issue of overfitting (see the model-comparison sketch below).
- Simpler models, such as SVM or Naïve Bayes, could also be explored (they are included in the model-comparison sketch below).
- Bootstrapping could also be employed: it helps with the small-dataset challenge by generating multiple resampled datasets, allowing more robust calculation of confidence intervals for your metrics; it also supports ensemble learning, which might further reduce overfitting, and it helps with model selection by showing how stable performance is across different subsets (see the final sketch below).
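For the dimensionality-reduction point, here is a minimal sketch (assuming scikit-learn, with `X` and `y` as placeholder names for your feature matrix and binary target, which I don't know) that puts PCA inside a pipeline so the scaling and projection are only ever fitted on training data:

```python
# Sketch only: PCA + logistic regression in one pipeline, so the scaler and
# the projection are fitted on training folds alone and never leak
# information from validation data. X/y are hypothetical names.
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression

pca_logit = make_pipeline(
    StandardScaler(),                   # PCA is sensitive to feature scale
    PCA(n_components=10),               # keep far fewer components than original features
    LogisticRegression(max_iter=1000),
)
# pca_logit.fit(X_train, y_train); pca_logit.predict_proba(X_test)
```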
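For regularisation, a penalised logistic regression can be fitted directly. The sketch below uses scikit-learn's `LogisticRegressionCV` with an L1 (LASSO-style) penalty, which shrinks many coefficients exactly to zero; again, `X` and `y` are placeholder names:

```python
# Sketch only: L1-penalised (LASSO-style) logistic regression, with the
# penalty strength chosen by internal cross-validation. Swap penalty="l2"
# for a ridge-style fit. X/y are hypothetical names for your data.
from sklearn.linear_model import LogisticRegressionCV

lasso_logit = LogisticRegressionCV(
    Cs=20,                 # grid of inverse regularisation strengths to try
    penalty="l1",
    solver="saga",         # "saga" (or "liblinear") is required for the L1 penalty
    scoring="roc_auc",
    cv=5,
    max_iter=5000,
)
# lasso_logit.fit(X, y)   # inspect lasso_logit.coef_ to see which features survive
```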
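For cross-validation and the simpler models, repeated stratified cross-validation gives a less noisy performance estimate from only 226 samples and lets you compare a few candidates on an equal footing. The models below are illustrative choices, not recommendations for your specific data, and `X`/`y` are again placeholders:

```python
# Sketch only: repeated stratified CV comparing a few simple candidate models.
# X/y are hypothetical names for your feature matrix and binary target.
from sklearn.model_selection import RepeatedStratifiedKFold, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.svm import LinearSVC
from sklearn.naive_bayes import GaussianNB

cv = RepeatedStratifiedKFold(n_splits=5, n_repeats=10, random_state=0)
candidates = {
    "ridge logistic": make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000)),
    "linear SVM": make_pipeline(StandardScaler(), LinearSVC()),
    "naive Bayes": GaussianNB(),
}
for name, model in candidates.items():
    scores = cross_val_score(model, X, y, cv=cv, scoring="roc_auc")
    print(f"{name}: AUC {scores.mean():.3f} +/- {scores.std():.3f}")
```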
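Finally, for bootstrapping, one simple use is to put a confidence interval around a performance metric by resampling rows with replacement. The sketch below assumes you have held-out true labels and predicted probabilities as NumPy arrays, here called `y_true` and `y_prob` (hypothetical names):

```python
# Sketch only: percentile bootstrap CI for AUC. y_true/y_prob are
# hypothetical names for held-out labels and predicted probabilities.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
boot_aucs = []
for _ in range(2000):
    idx = rng.integers(0, len(y_true), size=len(y_true))  # resample rows with replacement
    if len(np.unique(y_true[idx])) < 2:                   # skip resamples containing one class
        continue
    boot_aucs.append(roc_auc_score(y_true[idx], y_prob[idx]))

lo, hi = np.percentile(boot_aucs, [2.5, 97.5])
print(f"AUC 95% bootstrap CI: [{lo:.3f}, {hi:.3f}]")
```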
> I want to know if it's worth even attempting and using the data for publication.
I would say that it is worth attempting, but be sure to mitigate the $p \gg n$ problem as much as possible and describe how you did so in your methods section. If you can find some similar data to validate the model against, I would highly recommend that you do so.