Society is in crisis. The gap between rich and poor – whether measured in age, income, or skills – keeps widening as inequality grows markedly. Artificial Intelligence (AI) holds great potential for improvement. While AI is often viewed as a threat to social justice, AI technology, we argue, poses an opportunity to end inequality once and for all. Machine learning in language translation can collapse barriers between developing countries and the West. Algorithmic decision-making can lessen bias against minority groups. From transportation, healthcare, and agriculture to sustainability and governance, the positive applications of AI are unlimited in scope. Still, the question remains: how do we ensure that AI's potential is realized? Legal scholarship has focused on the risks AI poses to society rather than on formulating policy-based incentives for beneficial applications of AI knowledge by private and government entities. In fact, the unavailability and inaccessibility of data prevent the proliferation of AI technology for disadvantaged groups. Drawing on experience in other legal fields, such as intellectual property (IP) law, privacy law, and corporate law, we aim to shape a new incentive-based legal policy that makes AI accessible to all. To ensure AI for all, we suggest implementing equality by design, a legal framework built upon existing effective mechanisms to incentivize entities to harness AI's advantages for the greater good. Our framework aims to shatter barriers and distribute the benefits of AI fairly. Our proposal is threefold. First, creating legal safe harbors for entities that employ fair processing systems and subject their algorithms, predictions, and decision-making processes to ongoing scrutiny. This mechanism, already employed in copyright and privacy law, provides much-needed certainty in the post-GDPR environment.
Second, creating IP and data-protection mechanisms to ensure the interoperability of data and fair licensing; this category utilizes traditional IP ownership incentives while acknowledging key exceptions for access and compatibility. Third, establishing new corporate social responsibility (CSR) business practices that promote the sharing of non-identifiable data for social purposes. Under our suggested framework, corporations are not only obligated to process data fairly, non-discriminatorily, non-intrusively, and transparently, but are also incentivized to spread and distribute AI technologies, to share non-identifiable data for research purposes, and to ensure the interoperability of AI systems. Putting these new legal and policy measures into practice could have long-lasting effects in diminishing inequality and is expected to strengthen social solidarity.