'Hard takeoff' is a term used in futurism to refer to a sudden and probably unexpected singularity event (in the transhumanism sense, not a black hole). The most likely cause of this would be an AI that develops from human or near-human intelligence to superintelligence in a very short time period -- a matter of months, weeks, or days. This is in contrast to a slow progression, AKA a soft takeoff, which is traditionally assumed to be more probable.

The probability of a hard takeoff is difficult to judge, but in theory it should be almost inevitable. Once an AI of human intelligence has been developed, it is a simple matter to give it more processors (faster than any human!), a memory upgrade (photographic memory!), access to the internet (more knowledge immediately available than any human-level intelligence has ever had before!), a scientific calculator (1337 math skills!), etc. And just like that you've moved from human to superhuman.

The next step would be unpredictable. Being way better than human is not the same as being all-powerful, or even being functional in the real world. But it is possible that a smart, fast, goal-directed AI with access to all human knowledge would very quickly become the world's leading expert on computer programming and start to improve itself. Bootstrapping from software to hardware upgrades might be a significant challenge, but improved software can do a lot even without hardware upgrades, and in biological systems we've seen very small changes make very big differences -- after all, a chimpanzee's brain and a human's brain are not so very different in evolutionary terms, or on an evolutionary timescale.
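
To make the "software improving itself" dynamic concrete, here is a minimal toy simulation (not drawn from any particular source; the rates and step counts are arbitrary assumptions) contrasting improvement at a fixed, human-driven pace with improvement whose rate compounds because the system is upgrading its own ability to upgrade itself:

```python
# Toy model: compare a fixed-rate ("soft") improvement process with one
# where capability feeds back into the rate of improvement ("hard").
# All numbers here are arbitrary illustrative assumptions.

def soft_takeoff(capability=1.0, human_rate=0.05, steps=50):
    """Capability improves by a fixed increment per step (human-driven R&D)."""
    history = [capability]
    for _ in range(steps):
        capability += human_rate
        history.append(capability)
    return history

def hard_takeoff(capability=1.0, feedback=0.05, steps=50):
    """Each step's improvement is proportional to current capability,
    i.e. the system is improving its own ability to improve itself."""
    history = [capability]
    for _ in range(steps):
        capability += feedback * capability  # compounding self-improvement
        history.append(capability)
    return history

if __name__ == "__main__":
    soft, hard = soft_takeoff(), hard_takeoff()
    for step in (0, 10, 25, 50):
        print(f"step {step:2d}: soft={soft[step]:6.2f}  hard={hard[step]:8.2f}")
```

The linear curve plods along while the compounding one pulls away; the point is only that the feedback loop, not the starting level of intelligence, is what makes a takeoff "hard".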

Improved software is arguably less important than access to the information already available to human-level intelligences. While current AIs cannot readily interpret information presented in natural language and have an extremely limited ability to plan a course of study, we do have a tremendous amount of knowledge available on the internet, from Wikipedia to Khan Academy to JSTOR to the USPTO. Once this information becomes accessible to a being with a sense of agency, the ability to comprehend and synthesize information, and super speed and memory, science may make a few sudden jumps forward, most likely starting with computer science. At that point, AI research will proceed not at the speed of human research and development, but at superhuman speeds.

Regardless of whether you see this as an existential threat or merely an interesting feature of computer-based intelligence, it does suggest that if we want to program any controls or safeties into a superintelligent AI, the only safe time to do it is before it reaches near-human levels of intelligence.

It is worth noting that this only scratches the surface of the debate. Scott Alexander has argued, with some reason, that the time to start on serious AI control is before AIs reach the cognitive level of rats. The theory behind this is that the differences between rat brains and human brains aren't massive changes in overall complexity or organization, but rather the scale at which the brain operates. If there's one thing that computers can do well, it's scale up. In the absence of any clear evidence of where the watershed level of complexity lies, it is wise to err on the side of caution.
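
As a rough back-of-the-envelope illustration of the "it's mostly scale" point, the sketch below uses commonly cited approximate neuron counts (on the order of 200 million for a rat and 86 billion for a human); these figures are outside estimates that vary by source, not numbers from the argument above:

```python
# Back-of-the-envelope sketch of the "scale" argument.
# Neuron counts are commonly cited approximations and vary by source.
RAT_NEURONS = 200e6    # ~200 million
HUMAN_NEURONS = 86e9   # ~86 billion

ratio = HUMAN_NEURONS / RAT_NEURONS
print(f"A human brain has roughly {ratio:.0f}x the neurons of a rat brain.")
# If the gap really is mostly one of scale, a factor of a few hundred is
# exactly the kind of gap that hardware scaling routinely crosses.
```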