Do you or anybody else reading this have experience with differential privacy techniques on relatively small datasets (less than 10k people, say)? I’ve only heard of differential privacy used in the context of machine learning and massive datasets.
Well, I am far from an expert, but my understanding is that differential privacy operates on queries rather than on individual datapoints. There are tools such as randomized response, though, that provide plausible deniability for individual responses.
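For concreteness, here is a minimal sketch of classic (Warner-style) randomized response in Python; the function names and the 50/50 coin parameters are my own illustrative choices, not a reference implementation:

```python
import random

def randomized_response(true_answer: bool) -> bool:
    # Flip a fair coin: heads -> answer truthfully;
    # tails -> flip again and report that second coin instead.
    # Any single report satisfies ln(3)-differential privacy, since
    # P(report yes | truly yes) / P(report yes | truly no) = (3/4) / (1/4) = 3.
    if random.random() < 0.5:
        return true_answer
    return random.random() < 0.5

def estimate_true_rate(reports: list[bool]) -> float:
    # E[observed yes-rate] = 1/4 + true_rate / 2, so invert that relation
    # to get an unbiased estimate of the true yes-rate.
    observed = sum(reports) / len(reports)
    return 2 * observed - 0.5

# Illustrative run with a simulated population whose true yes-rate is 30%:
true_answers = [random.random() < 0.3 for _ in range(5_000)]
reports = [randomized_response(a) for a in true_answers]
print(estimate_true_rate(reports))  # ~0.3, plus noise on the order of 1/sqrt(n)
```

This does bear on the small-dataset question: each individual gets plausible deniability, but the estimator's standard error shrinks only like 1/sqrt(n), so at n ≈ 10k you are already paying roughly a percentage point of noise, and it gets worse quickly below that.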