Fraudulent academic journals are growing

Gina Kolata in the NY Times has been running a good series of articles on fraudulent academic publishing. The basic business model is an unholy alliance between academics looking to enhance their resumes and quick-buck internet sites. Initially, I thought these sites were enticing naive academics. But many academics are apparently willing participants, suggesting that it's easy to fool many promotion and award committees.

All but one of the 10 academics who won a School of Business and Economics award had published papers in these journals. One had 10 such articles.



What snakes are growing in the Gardens of Technological Eden?

Two emerging technologies, Artificial Intelligence and Big Data, are revolutionizing industries and will soon have big impacts on our health, jobs, entertainment, and entire lives. Of course, they have already had big effects in certain applications, but I expect they will become even more important as they improve. My colleague Dr. James Short is putting together a conference called Data West at the San Diego Supercomputer Center, and for it I came up with a list of fears: developments that might disrupt the emergence of these technologies.

1) If we continue to learn that ALL large data repositories will be hacked from time to time (Experian; National Security Agency), what blowback will that create against data collection? Perhaps none in the US, but in some other countries, it will cause less willingness to allow companies to collect consumer data.

2) Consensual reality is unraveling, mainly as a result of deliberate, sophisticated, distributed attacks. That should concern all of us as citizens. Should it also worry us as data users, or will chaos in public venues not leak over into formal data? For example, if information portals (YouTube, Facebook, etc.) are forced to take a more active role in censoring content, will advertisers care? Again, Europe may be very different. We can presume that any countermeasures will only be partly effective – the problem probably does not have a good technical solution.

3) Malware, extortion, etc. aimed at companies. Will this “poison the well” in general?

4) Malware, extortion, doxing, etc. aimed at Internet of Things users, such as owners of household thermostats, security cameras, and cars. Will this cause a backlash against sellers of these systems, or will people accept it as the "new normal"? So far, people have seemed willing to bet that it won't affect them personally, but will that change? For example, what will happen when auto accidents are caused by deliberate but unknown parties who advertise their success? Or when someone records all conversations within reach of the Alexa box in the living room?

Each of these scenarios has at least a 20% chance of becoming common. At a minimum, they will require more spending on defenses. Will any become large enough to suppress entire applications of these new technologies?

I have not said anything about employment and income distribution. They may change for the worse over the next 20 years, but the causes and solutions won’t be simple, and I doubt that political pressure will become strong enough to alter technology evolution.

Photovoltaics in Mission Bay neighborhood = 30% wasted

TL;DR: In Southern California, we should put PV on houses and buildings that are far from the coast, because coastal areas are cloudy much of the summer. But the actual pattern is the opposite. I estimate a loss on the order of 30%. Even my employer, UCSD, has engaged in this foolishness in order to appear trendy.

The bumpiness of this graph shows the effects of coastal weather in August.
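As a rough illustration of what a 30% loss means here, compare the annual output of the same panel at an inland site and a coastal site. The numbers below are purely hypothetical placeholders, not measurements; they just show the arithmetic behind a "30% wasted" figure.

    # Hypothetical annual yields (kWh per kW of installed panels); placeholders only.
    inland_kwh_per_kw  <- 1800
    coastal_kwh_per_kw <- 1260

    # Fraction of potential output given up by siting the panel on the coast.
    loss <- 1 - coastal_kwh_per_kw / inland_kwh_per_kw
    loss   # 0.3, i.e. about 30% of the potential output is wasted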


Showing linear regression coefficients

I have just finished my Big Data course for 2017 and noted some concepts that I want to teach better next year. One of them is how to interpret and use the coefficient estimates from linear regression. All economists are familiar with dense tables of coefficients and standard errors, but those tables require experience to read and are not at all intuitive. Here is a more intuitive and useful way to display the same information. The blue dots show the coefficient estimates, while the lines show +/- 2 standard errors around each coefficient. It's easy to see that the first two coefficients are "statistically significant at the 5% level", the third one is not, and so on. More importantly, the figure gives a clear view of the relative importance of different variables in determining the final outcomes.

Coefficient plot, from strengejacke.

The heavy lifting for this plot is done by the function sjp.lm from the sjPlot library. The main argument, here called linreg, is the standard result of a linear regression model, which is a complex list with all kinds of information buried in it.
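For concreteness, here is a minimal sketch of that workflow in R. The dataset and model formula are only illustrative (they are not from the course), and sjp.lm() reflects the sjPlot interface as of 2017; newer sjPlot releases provide plot_model() for the same kind of figure.

    # Illustrative only: dataset and variable names are placeholders.
    library(sjPlot)

    # Fit an ordinary linear regression; the result is the "complex list"
    # mentioned above.
    linreg <- lm(mpg ~ wt + hp + qsec + am, data = mtcars)

    # Dot-and-whisker plot of the coefficient estimates with their
    # confidence bands (roughly +/- 2 standard errors).
    sjp.lm(linreg)

The same fitted object still works with summary(linreg) for the traditional table of coefficients and standard errors; the plot simply presents those numbers graphically.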

Recent stories on AI, automation, and the future of work

======================

Melinda Gates and Fei-Fei Li Want to Liberate AI from “Guys With Hoodies”

Who designs software makes a big difference. And Silicon Valley employees are not a cross-section of anything, except each other. Nor need they be; but some balance is needed to make sure products are designed to help diverse people.

As a technologist, I see how AI and the fourth industrial revolution will impact every aspect of people’s lives. If you look at what AI is doing at amazing tech companies like Microsoft, Google, and other companies, it’s increasingly exciting.

But in the meantime, as an educator, as a woman, as a woman of color, as a mother, I’m increasingly worried. AI is about to make the biggest changes to humanity and we’re missing a whole generation of diverse technologists and leaders.  Source.

One reason this problem is growing right now is described in the next story: oligopoly control of AI applications in our lives.

=============================

Another case of the “Big 5” grabbing new AI-related technology before it becomes public.

Apple acquires AI company Lattice Data, a specialist in unstructured ‘dark data’, for $200M

The strength of this pattern, where the Big 5 (Apple, Amazon, Google, Microsoft, Facebook) buy out each novel tech idea and hide it in-house, strikes me as anti-competitive and bad for society as a whole. Apple, because of its level of secrecy, may be worse than some of the others. In a competitive world such purchases would not be a big problem – let the market figure it out. But with the huge cash levels of these companies, which in itself indicates monopoly power, they can effectively stifle new ideas that might threaten them in the long run.

==========================

Amazon’s new kind of grocery store likely wasn’t technically possible even five years ago.

How Amazon Go (probably) makes “just walk out” groceries a reality | Ars Technica

=========================

Big data and AI are not “objective”

AI, machine learning, etc. only appear to be objective. In reality, they reflect the worldview and prejudices of their developers.

Algorithms have been empowered to make decisions and take actions for the sake of efficiency and speed… [despite] the aura of objectivity and infallibility cultures tend to ascribe to them. … [The report examines] the shortcomings of algorithmic decisionmaking, identifies key themes around the problem of algorithmic errors and bias, and examines some approaches for combating these problems. This report highlights the added risks and complexities inherent in the use of algorithmic … decisionmaking in public policy. The report ends with a survey of approaches for combating these problems.

Source: An Intelligence in Our Image: The Risks of Bias and Errors in Artificial Intelligence | RAND