I have severe doubts about how soon "strong AI", in the sense of a conscious machine, is going to arrive. We're too quick to assign "motivations" to machines and assume they follow the same drives (self-preservation, sex, power and so on) that humans do, so even if they were very clever, they might not express it in the ways we expect. The idea that there will be a sudden leap to sentience could be exaggerated, too. It took us tens of thousands of years, and a lot of interaction, learning and evolution driven by the external environment, to achieve. Similarly, I think even if we matched the human brain in a robot, it would still have to go through the equivalent of a "child rearing" stage, learning about other people's expectations and so on, before it was useful. Once that was done we could copy it to other robots, but it's still a long-term project.

Just think of the complexities of explaining a simple concept to someone in conversation: you need a model of what the other person already knows, built partly from experience and partly guessed from context; then you have to adapt your explanation of the concepts involved to what they know, meanwhile making sure that your tone of voice is neither patronising, threatening nor aloof, and reacting to their cues of boredom, confusion or impatience to talk as and when they occur. That's a LOT of stuff; it's not 10000 lines of C ;)

That is also a long-winded way of saying that the human brain does not appear to be a homogeneous neural net (as some AI people initially supposed); it's a complex combination of special-purpose ones. So it's not enough to just set a goal condition and let a generic neural net run - I've tried things like this and certain complex tasks are done badly. That's probably why we have dedicated brain circuits for navigation and sense of space, for instance.
The more useful (and totally different) branch of this in my lifetime could be the weak AI robots (like HRP-4C). Now that they have the appearance of a human to some rudimentary level, and speech recognition and synthesis are readily available, the difficult part - actually holding a conversation with the robot - reduces to the same task computer games programmers face when they add conversation to virtual characters. What I've noticed is that some of these attempts fail, and it feels like you're nursing the thing along to get any response at all, but other games give much more of a feeling of interaction. The key seems to be restricting the conversation to a very specific subject or scenario, within which the AI can sound very knowledgeable (often this is done by restricting what inputs the player can give). The "breadth" is what is difficult; perhaps some sort of Wikipedia of interlocking specialist AIs could try to cover the whole ground, but I bet the holes would always show through in any conversation involving insight in the abstract sense.
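As a minimal sketch of how that domain restriction works in practice - nothing more than keyword matching inside one known scenario (a hotel check-in desk here; all keywords and replies are invented for illustration):

```python
# A scenario-restricted dialogue agent: it only sounds competent because
# every input is funnelled into a small, pre-scripted topic. The moment
# the user strays off-topic, the "holes show through" and it deflects.
RESPONSES = {
    "reservation": "Certainly. May I have the name the booking is under?",
    "wifi": "The network is 'HotelGuest'; the password is on your key card.",
    "checkout": "Checkout is at 11am, but we can arrange a late checkout.",
}

def reply(utterance: str) -> str:
    """Answer within the scenario, or escalate to a human supervisor."""
    lowered = utterance.lower()
    for keyword, answer in RESPONSES.items():
        if keyword in lowered:
            return answer
    # Anything outside the scripted domain gets handed off.
    return "Let me fetch the manager, who can help you with that."
```

Within the scenario it feels responsive; ask it anything abstract and it falls straight back to the escalation line, which is exactly the "human supervisor for complicated cases" pattern.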
No, what these new androids would be good at are jobs like flight attendant, waitress or hotel check-in desk, usually backed up by at least one human supervisor for any "complicated" cases (which often get escalated to the line manager even with humans). Cleaner would be a useful one, but even that "menial" job requires quite a lot of complex AI: recognising different sorts of object, telling whether dirt is sticky or dry, choosing which technique to use, and so on. I suspect AIs are better at flying planes than at cleaning a house. The skill set would grow over time, but achieving "general" intelligence is a very long way off technically.
There is another angle on this one. The difficulty in producing strong AI may relate to the improbability of complex or sentient life existing in the first place, particularly the "squeeze points" in the latter stages - e.g. why did hominid brains undergo a runaway increase in complexity?
What we don't know is what I refer to as the "improbability budget" of us existing. Many scientists have taken the Copernican principle a little too far and assumed that there was so little special about our place in the universe that there should be intelligent life on every 12th star system. Then they wondered why there was no evidence of signals or spacecraft from nearby super-civilisations.
What most people don't seem to realise is that the naive Copernican principle is violated in a number of senses. The volume of the Earth's habitable atmosphere is only about 10^-21 (one sextillionth... ugh) of the volume of the solar system, yet philosophers mostly don't spend their time wondering why we aren't floating around in some nondescript bit of space beyond Uranus.
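That figure is easy to sanity-check with back-of-envelope numbers (the ~10 km "breathable shell" and the 40 AU solar-system radius are rough assumptions of mine; any similar choices give the same order of magnitude):

```python
from math import pi

AU = 1.496e11             # metres per astronomical unit
r_earth = 6.371e6         # Earth's radius in metres
shell = 1.0e4             # ~10 km thick breathable shell (rough assumption)
r_solar = 40 * AU         # solar system taken out to roughly Neptune's orbit

# Atmosphere approximated as a thin spherical shell around the Earth.
v_atmosphere = 4 * pi * r_earth**2 * shell
v_solar_system = (4 / 3) * pi * r_solar**3

ratio = v_atmosphere / v_solar_system
```

With these inputs the ratio comes out at a few times 10^-21, i.e. the sextillionth quoted above.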
The reason is that there is a structural principle (reproduction) that ensures we are born near our parents and hence into survivable conditions. Evolution is much the same: it telescopes the improbabilities down, but it does not necessarily eliminate them. There was, at the outset, the improbability of our great^trillion grandmother proto-cell existing in the right place at the right time to start the whole thing off - in terms of location, but more severely, configuration. And evolution explains some things better than others.

Here's where we enter some interesting waters where creationists' arguments point in an informative direction (but of course not the one they think - who is this "god" chap anyway?). The evolutionary sequence of things like the eye and other complex systems is cited as evidence for intelligent design, and some scientists have concluded that because we can see the sequence happening, it must be a result of evolution alone. But it's not that simple. It could be a mixture of evolution and improbability - and we don't yet know how much improbability has been mixed into the recipe, because we don't know the distance between us and the nearest comparable civilisation (which is asymptotically proportional to the cube root of the improbability in a large-scale homogeneous, spatially flat universe). So this eye-evolution thing could be like watching the sequence of events that led up to Bill Gates as opposed to a working-class person: lots of luck. Luck that is massively reduced by evolution being in there, but still luck. Another interesting example cited by creationists is the squeeze point of producing an RNA/protein system in the first place; some people are talking about figures of the order of 10^-50 or 10^-200 there.
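The cube-root relation just comes from counting civilisations in a volume: if each star system independently hosts one with probability p, setting the expected count inside radius R to 1 and solving gives R proportional to p^(-1/3). A toy calculation (the stellar density is a commonly quoted local figure; everything else is illustrative):

```python
from math import pi

def nearest_civilisation_distance(p, n=0.004):
    """Rough expected distance (light-years) to the nearest comparable
    civilisation, if each star system hosts one with probability p and
    n is the stellar number density (~0.004 stars per cubic light-year
    locally). Solves (4/3)*pi*R^3 * n * p = 1 for R."""
    return (3 / (4 * pi * n * p)) ** (1 / 3)

# Making life a million times less probable pushes the nearest
# civilisation only a hundred times (10^6 ** (1/3)) further away.
d1 = nearest_civilisation_distance(1e-6)
d2 = nearest_civilisation_distance(1e-12)
```

That cube-root compression is why observing an empty sky constrains the improbability so weakly: enormous changes in p move the expected distance by comparatively modest factors.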
This relates to AI because hominid evolution was rather peculiar; it certainly seems to be the first such event to have happened on Earth. Animals, on the other hand, come in all sorts of variants (almost everything possible), so you can assume that once complex life at that level existed, the squeeze was over and many other possibilities could be explored at similar levels of likelihood. If we have to navigate that same "difficult bit" with technology, we may find it doesn't just do itself but requires some very particular preconditions.