As smart technologies such as artificial intelligence (AI), automation and the Internet of Things (IoT) are increasingly embedded in commercial and government services, new challenges arise for digital inclusion: ensuring that existing inequalities are not reinforced and that newly created gaps can be addressed. Digital exclusion is often compounded by existing social disadvantage, and new systems risk creating new barriers and harms. Adopting a case study approach, this paper examines the exclusionary practices embedded in the design and implementation of social welfare services in Australia. We examine Centrelink’s automated Online Compliance Intervention system (‘Robodebt’) and the National Disability Insurance Agency’s intelligent avatar interface ‘Nadia’. The two cases show, first, how the introduction of automated systems can reinforce the punitive policies of an existing service regime at the design stage, and second, how innovative AI systems with the potential to enhance user participation and inclusion can be hindered at implementation, leaving digital benefits unrealised.