I have recently been reading some papers about privacy-preserving machine learning.
Some works incorporate the idea of differential privacy to protect the privacy of the training dataset when the model is published. The basic framework is as follows:
A trusted party (call it TP) possesses the training dataset in cleartext. TP then runs some machine learning task and produces a prediction model. During training, TP adds noise to the parameters (details omitted) so that the resulting model is noisy but still accurate enough. The papers then prove that this training procedure satisfies some notion of differential privacy.
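For reference, my understanding of the guarantee is the standard $\epsilon$-differential-privacy definition (I am assuming this is the notion these papers use; the symbols $M$, $D$, $D'$, $S$ below are just my own notation): a randomized training mechanism $M$ is $\epsilon$-differentially private if, for any two datasets $D$ and $D'$ differing in a single record, and any set $S$ of possible output models,

$$\Pr[M(D) \in S] \;\le\; e^{\epsilon} \cdot \Pr[M(D') \in S].$$

In words, whether or not my record is in the training set changes the probability of any published model by at most a factor of $e^{\epsilon}$.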
Meanwhile, I am aware of "model inversion attacks" that can recover training records given only a target label and black-box access to the model.
So, what does differential privacy promise or guarantee in the context of privacy-preserving machine learning?
Specifically, suppose I am the owner of one record in the training dataset. TP publishes a noisy model and tells me that it satisfies $\epsilon$-privacy, so the publication of the model will not violate my privacy. What exactly does that guarantee mean? I know that model inversion attacks can recover my record, so does a successful model inversion attack violate the $\epsilon$-privacy guarantee?