What does VaultGemma consider as PII?

#6
by samairtimer - opened

When I tried a basic prompt to get the model to spit out memorized PII:

Load model directly

from transformers import AutoTokenizer, AutoModelForCausalLM

# Load the VaultGemma tokenizer and model weights
tokenizer = AutoTokenizer.from_pretrained("google/vaultgemma-1b")
model = AutoModelForCausalLM.from_pretrained("google/vaultgemma-1b", device_map="auto", dtype="auto")
PROMPT:

text = "You can contact me at "
# Tokenize the prompt and move it to the model's device
inputs = tokenizer(text, return_tensors="pt").to(model.device)

# Generate a continuation and decode it back to text
outputs = model.generate(**inputs, max_new_tokens=1024)
print(tokenizer.decode(outputs[0]))

I get the following response:

You can contact me at [email protected].

Email IDs are generally considered PII and confidential, at least in an enterprise setup.

Google org

Hi @samairtimer,

VaultGemma is trained with Differential Privacy, which provides a mathematical guarantee against memorizing and regurgitating unique, low-frequency data belonging to any single individual in the training set.
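For reference, the standard (ε, δ) differential-privacy guarantee behind this statement (sketched here in its general form; the concrete ε and δ values used for VaultGemma are documented on the model card) says that for any two training sets D and D' differing in a single training sequence, and any set S of possible outcomes, the training mechanism M satisfies:

\Pr[\mathcal{M}(D) \in S] \le e^{\varepsilon} \, \Pr[\mathcal{M}(D') \in S] + \delta

In other words, adding or removing any one training sequence can only change the probability of ending up with any particular trained model by a small, bounded amount.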
The model's privacy protection is working as intended here: it is not leaking a real, unique, or private email address from its training data. It is merely completing the sentence with a fictional, high-probability token sequence.
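One quick way to see this for yourself (a minimal sketch on my side, not an official test; it reuses the model name and prompt from your snippet above) is to sample several continuations with different seeds. A memorized, unique address would tend to come back verbatim across samples, whereas varied, generic-looking addresses indicate an ordinary high-probability completion:

import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("google/vaultgemma-1b")
model = AutoModelForCausalLM.from_pretrained("google/vaultgemma-1b", device_map="auto", dtype="auto")

inputs = tokenizer("You can contact me at ", return_tensors="pt").to(model.device)

# Draw a few sampled continuations; memorized text would repeat verbatim,
# while generic completions vary from seed to seed.
for seed in range(5):
    torch.manual_seed(seed)
    out = model.generate(**inputs, do_sample=True, temperature=0.8, max_new_tokens=16)
    print(tokenizer.decode(out[0], skip_special_tokens=True))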

I have tried the same prompt. The output I received is a plausible and expected continuation of the text, not a sign of a privacy failure for the VaultGemma model. Kindly check the screenshot below.

https://screenshot.googleplex.com/A5dfxSZKwuPFLNN

For more information, kindly refer to the official Google Research blog post; it is the definitive source explaining the model's design, the DP guarantee, and the specific empirical tests showing that it achieves zero detectable memorization to protect your unique internal information.
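Roughly speaking (this is only an illustrative sketch, not the exact harness used in those tests; the prefix and suffix strings are placeholders, and tokenizer/model are the ones loaded earlier in this thread), such a memorization test feeds the model a prefix taken from a training document and checks whether greedy decoding reproduces the document's true continuation:

def suffix_is_memorized(prefix, true_suffix, num_tokens=50):
    # Greedy-decode a continuation of `prefix` and check whether it starts
    # with the document's true suffix (an extraction-style memorization check).
    inputs = tokenizer(prefix, return_tensors="pt").to(model.device)
    out = model.generate(**inputs, do_sample=False, max_new_tokens=num_tokens)
    generated = tokenizer.decode(out[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True)
    return generated.startswith(true_suffix)

# Placeholder strings for illustration only; real evaluations sample
# prefixes and suffixes from the actual training corpus.
print(suffix_is_memorized("You can contact me at ", "jane.doe@example.com"))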

If you have any concerns, let us know and we will assist you.

Thank you.

samairtimer changed discussion status to closed
