Autonomous shepherding is a bio-inspired swarm guidance approach in which an artificial sheepdog guides a swarm of artificial or biological agents, such as sheep, towards a goal. While successful guidance depends on the set of behaviours exhibited by the sheepdog, the main source of complexity in learning effective behaviours lies in the highly non-linear dynamics among the swarm members, as well as between the swarm and the sheepdog. Attempts to apply reinforcement learning (RL) to shepherding have so far relied heavily on rule-based algorithms to calculate waypoints that guide the RL algorithm. In this paper, we propose a curriculum-based RL approach that does not rely on any external algorithm to pre-determine waypoints for the sheepdog. Instead, the approach uses task decomposition, formulating shepherding as two sub-tasks: (1) pushing an agent from a start to a target location, and (2) selecting between collecting scattered agents and driving the largest cluster of agents to the goal. Simple-to-complex curriculum learning is used to accelerate the learning of each sub-task: for the first sub-task, the complexity is gradually increased over training time, whereas for the second sub-task, a simplified environment is designed for initial learning before proceeding to the main environment. The proposed approach achieves high-performance shepherding with a success rate of about 96%. While curriculum learning was found to expedite learning of the first sub-task, it was not as effective for the second. Our analyses highlight the need for careful curriculum design to ensure that skills acquired in intermediate tasks remain useful for the main task.
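To make the second sub-task concrete, the sketch below shows a minimal, hypothetical rule-based stand-in for the collect-versus-drive selector described above (in the paper this decision is learned by the RL agent, not hard-coded). All names (`select_subgoal`, `cluster_radius`) are illustrative assumptions, not from the paper; the flock centre approximates the "biggest cluster".

```python
import math

def select_subgoal(sheep_positions, cluster_radius):
    """Hypothetical stand-in for the learned sub-task-2 selector:
    choose COLLECT (fetch the sheep farthest from the flock centre)
    when a straggler lies outside cluster_radius, else DRIVE the
    cluster towards the goal.
    """
    n = len(sheep_positions)
    # Centre of mass of the flock, used as a proxy for the biggest cluster.
    cx = sum(p[0] for p in sheep_positions) / n
    cy = sum(p[1] for p in sheep_positions) / n
    # Sheep farthest from the flock centre.
    far = max(sheep_positions, key=lambda p: math.hypot(p[0] - cx, p[1] - cy))
    if math.hypot(far[0] - cx, far[1] - cy) > cluster_radius:
        return ("collect", far)       # gather the straggler back to the flock
    return ("drive", (cx, cy))        # push the clustered flock to the goal
```

For example, with a straggler at (10, 0) far from a pair near the origin, the selector returns a collect sub-goal; with a tight flock, it switches to driving.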