Figure 2
Examples of model leakage and model protection, showing model matrices in which columns correspond to tasks and rows correspond to features. The 10th task is an anomaly task that requires privacy protection. In (a), the matrix denoted by W^(0) is first generated from an i.i.d. uniform distribution U(0, 1); then the last column is multiplied by 100. We then run Algorithm 2, taking m = 10, d = 5, T = 1, η = 1, ε_1 = ··· = ε_T = 0.1, δ = 0, K = 100√5 and λ = 50. The noise matrix E is not added in (b) but is added in (c). Panel (a) shows W^(0); (b) and (c) show W^(1) under their respective settings. The columns shown have been divided by their respective ℓ2 norms. In (b), the 10th task significantly influences the parameters of the other models, especially on the first and the last features. In (c), the influence of the 10th task is not significant; meanwhile, the second and the fifth features are shared by most tasks, as they should be.

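For concreteness, the toy matrix W^(0) described in the caption can be reproduced in a few lines of NumPy. This is a minimal sketch of the setup only; the seed and variable names are illustrative, and Algorithm 2 itself is not reproduced here:

    import numpy as np

    rng = np.random.default_rng(0)              # illustrative seed
    d, m = 5, 10                                # features (rows), tasks (columns)
    W0 = rng.uniform(0.0, 1.0, size=(d, m))     # W^(0) with i.i.d. U(0, 1) entries
    W0[:, -1] *= 100.0                          # the 10th task becomes the anomaly

    # As in the figure, columns are compared after dividing by their l2 norms.
    W0_vis = W0 / np.linalg.norm(W0, axis=0, keepdims=True)
    print(W0_vis.round(2))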

Source publication
Preprint
Multi-task learning (MTL) refers to the paradigm of learning multiple related tasks together. By contrast, single-task learning (STL) learns each individual task independently. MTL often leads to better-trained models because they can leverage the commonalities among related tasks. However, because MTL algorithms will "transmit" information on diff...

Contexts in source publication

Context 1
... provide a running example for model leakage and model protection under different settings of Algorithm 2, as shown in Fig. 2. We generate models for m = 10 tasks, in which the data dimension is d = 5. The 10th task (the rightmost one) is an anomaly task that requires privacy protection. In Fig. 2 (a), the matrix denoted by W^(0) is first generated from an i.i.d. uniform distribution U(0, 1). Then, the rightmost column is multiplied by 100. For MTL with ...
Context 2
... provide a running example for model leakage and model protection under different settings of Algorithm 2, as shown in Fig. 2. We generate models for m = 10 tasks, in which the data dimension is d = 5. The 10th task (the rightmost one) is an anomaly task that requires privacy protection. In Fig. 2 (a), the matrix denoted by W^(0) is first generated from an i.i.d. uniform distribution U(0, 1). Then, the rightmost column is multiplied by 100. For MTL with model leakage, we execute Algorithm 2, setting T = 1, η = 1, ε_1 = ··· = ε_T = 1e40, δ = 0, K = 100√5 and λ = 50. Since ε_1 is extremely large, the noise matrix E can be regarded as not added. The output ...
Context 3
... by W^(0) is first generated from an i.i.d. uniform distribution U(0, 1). Then, the rightmost column is multiplied by 100. For MTL with model leakage, we execute Algorithm 2, setting T = 1, η = 1, ε_1 = ··· = ε_T = 1e40, δ = 0, K = 100√5 and λ = 50. Since ε_1 is extremely large, the noise matrix E can be regarded as not added. The output model matrix W^(1) is shown in Fig. 2 (b), in which the 10th task significantly influences the parameters of the other models: the other models' parameters are similar to those of the 10th task, e.g., for each task, the first feature is the largest, and the fifth feature is the smallest. For MTL with model protection, we execute Algorithm 2 with the same setting as above ...
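To see how an un-noised sharing step can leak the anomalous task, the sketch below uses a simplified stand-in for one low-rank MTL iteration, not the paper's actual Algorithm 2: form the parameter covariance W W^T (optionally perturbed by a noise matrix E), keep its top eigenvectors, and project every task's model onto that shared subspace. Because the 10th column of W^(0) is 100 times larger than the rest, it dominates the covariance, so every projected model ends up resembling the anomalous task. The sketch reuses W0 from the setup above:

    import numpy as np

    def shared_projection(W, rank=1, E=None):
        """Project task models onto the top-`rank` eigenspace of W @ W.T (+ E)."""
        sigma = W @ W.T
        if E is not None:
            sigma = sigma + E                   # perturb the covariance before sharing
        _, eigvecs = np.linalg.eigh(sigma)      # eigenvalues in ascending order
        U = eigvecs[:, -rank:]                  # top-`rank` shared feature directions
        return U @ (U.T @ W)                    # projected models, playing the role of W^(1)

    W1_leak = shared_projection(W0, rank=1)     # no noise: model leakage
    W1_leak /= np.linalg.norm(W1_leak, axis=0, keepdims=True)
    # With rank=1, every normalized column is (up to sign) the same vector,
    # essentially the normalized 10th column of W0.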
Context 4
... which the 10th task significantly influences the parameters of the other models: the other models' parameters are similar to those of the 10th task, e.g., for each task, the first feature is the largest, and the fifth feature is the smallest. For MTL with model protection, we execute Algorithm 2 with the same setting as above, except that the noise matrix E is added; the output model matrix W^(1) is shown in Fig. 2 (c), in which the influences from the 10th task are not significant: the other models' parameters are not similar to those of the 10th task. Meanwhile, for W^(0), shown in Fig. 2 (a), for tasks 1-9, the ℓ2 norms of the second and the fifth rows are the two largest ones; these are clearly shown in Fig. 2 (c). This result means the shared ...
Context 5
... the first feature is the largest, and the fifth feature is the smallest. For MTL with model protection, we execute Algorithm 2 with the same setting as above, except that the noise matrix E is added; the output model matrix W^(1) is shown in Fig. 2 (c), in which the influences from the 10th task are not significant: the other models' parameters are not similar to those of the 10th task. Meanwhile, for W^(0), shown in Fig. 2 (a), for tasks 1-9, the ℓ2 norms of the second and the fifth rows are the two largest ones; these are clearly shown in Fig. 2 (c). This result means the shared information between tasks is to use the second and the fifth features, which is successfully extracted by the MTL method with model ...
Context 6
... 2 with the same setting as above, except that the noise matrix E is added; the output is shown in Fig. 2 (c), in which the influences from the 10th task are not significant: the other models' parameters are not similar to those of the 10th task. Meanwhile, for W^(0), shown in Fig. 2 (a), for tasks 1-9, the ℓ2 norms of the second and the fifth rows are the two largest ones; these are clearly shown in Fig. 2 (c). This result means the shared information between tasks is to use the second and the fifth features, which is successfully extracted by the MTL method with model ...
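Continuing the sketch, protection corresponds to perturbing the covariance with a noise matrix E before the shared subspace is extracted. The Wishart-style noise and its calibration below are illustrative assumptions rather than the paper's specified mechanism, reusing the figure's K = 100√5 and ε = 0.1, plus W0, d and shared_projection from the earlier sketches:

    import numpy as np

    rng = np.random.default_rng(1)              # illustrative seed
    K = 100 * np.sqrt(5)                        # norm bound from the figure's setting
    eps = 0.1                                   # per-iteration privacy budget
    # Wishart-style PSD noise E = A @ A.T with A ~ N(0, K^2 / (2 * eps));
    # this calibration is a hypothetical stand-in.
    A = rng.standard_normal((d, d + 1)) * (K / np.sqrt(2 * eps))
    E = A @ A.T

    W1_prot = shared_projection(W0, rank=1, E=E)

    # The figure's particular draw of W^(0) gave the second and fifth rows the
    # largest l2 norms over tasks 1-9; a different seed will favor other rows.
    print(np.linalg.norm(W0[:, :9], axis=1).round(2))

With noise of this magnitude dominating the covariance, the extracted subspace no longer mirrors the 10th task, which is the qualitative effect shown in Fig. 2 (c).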
Context 7
... we present the results mostly for our low-rank algorithm (denoted by MP-MTL-LR) because it always outperforms our group-sparse algorithm (MP-MTL-GS) in the above experiments. The results corresponding to School Data are shown in Fig. 11; the results corresponding to LSOA II Data are shown in Fig. 12. From those plots, we observe that on both real-world datasets, our MP-MTL method behaves similarly at different training-data percentages and outperforms DP-MTRL and DP-AGGR, especially when ε is ...

Similar publications

Conference Paper
We present a superluminal transmission line (STL) loaded with non-Foster negative capacitors. The loaded negative capacitors decrease the effective capacitance and the effective permittivity of the dielectric substrate of the TL. Since the wave propagation velocity depends on the effective capacitance of the line, it can be higher than the speed of light c0 i...
Article
The use of the free temporoparietal fascia flap for fistula prevention after salvage laryngectomy. Abstract: A retrospective review was conducted to evaluate the role of the temporoparietal fascia flap (TPFF), comparing postoperative pharyngocutaneous fistula (PCF) rates and functional outcomes with those of...
Article
Background: Virtual implant planning systems integrate (cone beam-) computed tomography data to assess bone quantity and virtual models for the design of the implant-retained prosthesis and drill guides. Five commercially available systems for virtual implant planning were examined regarding the modalities of integration of radiographic data, virt...
Article
The paradigm shift towards a decentralised approach of cloud manufacturing requires tighter standardisation and efficient interfaces between additive manufacturing (AM) data and production. In parallel with technology advancements, it is important to consider the digital chain of information. Although a plethora of AM formats exist, only some are c...
Preprint
We propose a framework based on Recurrent Neural Networks (RNNs) to determine an optimal control strategy for a discrete-time system that is required to satisfy specifications given as Signal Temporal Logic (STL) formulae. RNNs can store information of a system over time, thus, enable us to determine satisfaction of the dynamic temporal requirement...