We present Newtron, an efficient algorithm for the problem of online multiclass prediction with bandit feedback in the fully adversarial setting. We measure its regret with respect to the log-loss defined by Abernethy and Rakhlin (2009), which is parameterized by a scalar α. We prove that the regret of Newtron is O(log T) when α is a constant that does not vary with the horizon T, and at most O(T^(2/3)) if α is allowed to grow to infinity with T. For α = O(log T), the regret is bounded by O(√T), thus solving the open problem of Kakade et al. (2008) and Abernethy and Rakhlin (2009). Our algorithm is based on a novel application of the online Newton method (Hazan et al., 2007). We evaluate Newtron experimentally and show that it performs well even when α is a small constant.
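To make the underlying technique concrete, the following is a minimal one-dimensional sketch of the online Newton step of Hazan et al. (2007), on which our algorithm builds. It is for illustration only: Newtron's actual update operates on multiclass weight matrices with log-loss gradients, and the names and parameter values here (`eta`, `eps`, `radius`) are illustrative assumptions, not the paper's settings.

```python
def online_newton_step_1d(grads, eta=0.5, eps=1.0, radius=1.0):
    """Run a 1-D online Newton step against a stream of loss gradients.

    A accumulates squared gradients (the 1-D analogue of the matrix
    A_t = eps*I + sum_s g_s g_s^T); each round moves x by the
    Newton-like increment g / (eta * A) and then projects back onto
    the feasible interval [-radius, radius].
    """
    x, A = 0.0, eps
    iterates = []
    for g in grads:
        A += g * g                        # second-order information update
        x -= g / (eta * A)                # Newton-like step
        x = max(-radius, min(radius, x))  # projection onto the feasible set
        iterates.append(x)
    return iterates
```

In higher dimensions the scalar `A` becomes a matrix and the projection is taken in the norm it induces; the sketch only conveys the shape of the update.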