Abstract
As a sub-branch of Artificial Intelligence (AI), Machine Learning (ML) is an inductive method of problem solving that can accomplish tasks which once required human participation and discretion. As governments and other institutions increasingly deploy ML-based systems to predict, rate, and act upon individuals’ behaviour or personal traits, there is growing political and legal demand for transparency, so that the outcomes of these systems can be interpreted and, where necessary, contested. Previous research has revealed that transparency in automated decision-making (ADM) entails not only openness and disclosure in the conventional sense but also further administrative and technical measures, such as algorithmic audits or black-box testing of these systems. Implementing this broadened scope of transparency inevitably involves reproducing and/or adapting the relevant informational elements and components of ML-based systems. This gives rise to three questions: (1) to what extent can reliance on Intellectual Property (IP) rights excuse automated decision-makers from the obligation to make transparent and contestable decisions, e.g., under Article 22 of the General Data Protection Regulation (GDPR); (2) what counter-arguments arise from the statutory exceptions and limitations that restrict IP rights; and (3) what are the possible solutions, either within the IP regime or through regulatory intervention. Overall, the paper aims to obtain a macro-view of the potential areas of conflict between possible transparency measures/tools and the relevant IP regimes, i.e., copyright, the sui generis database right and trade secret protection.
| Original language | English |
|---|---|
| Pages (from-to) | 329-364 |
| Number of pages | 36 |
| Journal | European Review of Private Law |
| Volume | 31 |
| Issue number | 2/3 |
| Publication status | Published - 30 Sept 2023 |