In the past two decades, numerous authors have investigated the question of responsibility for computer errors. One popular line of argument is that it is inappropriate to hold the computer responsible because, in addition to interfering with the identification of the human beings who are responsible for such errors, computers simply are not responsible beings. Because computers occupy three roles to which we traditionally look when assigning blame (advisor, decision-maker, and actor), I suggest that we should reconsider our aversion to blaming computers and seriously investigate whether it is possible to build responsible computers. I begin by arguing that the metaphor of computer-as-tool hinders our ability even to conceive of the possibility of building responsible computers, and I suggest a shift to the metaphor of computer-as-child. Next, I present a brief account of moral responsibility in terms of the ability to respond to one’s environment and one’s peers. Drawing on this account, I address objections to the possibility of responsible computers involving autonomy (suggesting an account of autonomy as competence rather than choice), freedom (suggesting an alternative interpretation of ‘could have done otherwise’), and intentionality (addressing intentionality as both meaning and purpose). I conclude by suggesting that a computer endowed with learning algorithms and communication skills could, if designed correctly, be morally responsible.