Machine learning algorithms hold substantial promise for administrative agencies, suggesting more effective and accurate decision-making in a variety of settings, from welfare payment decisions to assessing the risk posed by repeat offenders. However, the use of machine learning algorithms also raises difficulties, central among which is that they are inscrutable, i.e., they generate predictions that lie beyond the grasp of human cognition. This trait is in tension with the emphasis on reason-giving in administrative law. The Article explores this tension and advances two interrelated arguments. First, incorporating inscrutable machine learning predictions into the administrative process carries a normative cost, specifically for individuals who are affected by decisions they do not understand. This is because providing adequate reasons for administrative decisions is an important facet of treating individuals as subjects with independent rational capacities, which decisions based on inscrutable algorithms may compromise. Second, as a long-term concern, inscrutability may generate a systemic effect by gradually shrinking the realm of the humanly explainable in public life, a phenomenon that inverts Max Weber's notion of disenchantment and may be termed “re-enchantment.”