Scrivener 3 for Academic Writing: An In-Depth Review

I use a variety of specialized software for note-taking, managing academic papers, and so on. Rather than write my own review of Scrivener, I am linking to someone else’s here. I added a comment to her post about using it with bibliography software.

Feel free to add links to your own favorite Scrivener reviews in the comments. There is a fair amount of overhead in learning Scrivener, but for longer projects (e.g., more than 10,000 words) it saves writers from “multiple version hell.”

In this in-depth Scrivener 3 review, I show you why Scrivener is the best word processor for academic writing. Unlike Word, Scrivener 3 keeps all your research and writing in one place. Its best features? The ability to drag and drop to reorganize your draft, split screen mode, word targets, and linguistic focus.

Source: Scrivener 3 for Academic Writing: An In-Depth Review

Rescuing a medical treatment from failure in a clinical trial by using post hoc Bayesian analysis

How can researchers maximize learning from experiments, especially from very expensive experiments such as clinical trials? This article shows how a Bayesian analysis of the data would have been much more informative, and likely would have saved a useful new technique for treating ARDS (acute respiratory distress syndrome).

I am a big supporter of Bayesian methods, which will become even more important and useful with machine learning. But a colleague, Dr. Nick Eubank, pointed out that the data could also have been re-analyzed using frequentist statistics. The problem with the original analysis was not primarily that it used frequentist statistics. Rather, it was that the trial set a fixed (and rather demanding) threshold for defining success, a threshold that was probably unattainable. The clinical trial could still have been “saved,” even by conventional statistics.
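
For readers who want to see the mechanics, here is a minimal sketch in Python of the kind of calculation involved: the posterior distribution of the relative risk (RR) of death under uninformative Beta(1, 1) priors on each arm’s mortality rate. The event counts below are placeholders for illustration, not the trial’s actual data, and the thresholds (1.0, 0.9, 0.67) correspond to the ones discussed in the letter below.

```python
# Minimal sketch: posterior probability that a treatment reduces mortality,
# using independent Beta(1, 1) (uninformative) priors on each arm's death rate.
# The event counts below are illustrative placeholders, NOT the actual trial data.
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical counts: deaths / patients in each arm (placeholders)
deaths_treat, n_treat = 44, 124
deaths_ctrl,  n_ctrl  = 57, 125

draws = 200_000
# Beta(1 + deaths, 1 + survivors) is the posterior under a flat prior
p_treat = rng.beta(1 + deaths_treat, 1 + n_treat - deaths_treat, draws)
p_ctrl  = rng.beta(1 + deaths_ctrl,  1 + n_ctrl  - deaths_ctrl,  draws)

rr = p_treat / p_ctrl  # posterior samples of the relative risk of death

print(f"P(RR < 1.00), i.e. any benefit:        {np.mean(rr < 1.00):.2f}")
print(f"P(RR < 0.90), modest benefit:          {np.mean(rr < 0.90):.2f}")
print(f"P(RR < 0.67), the trial's success bar: {np.mean(rr < 0.67):.2f}")
```

The point is that the answer depends far more on which threshold you ask about than on whether the machinery is labeled Bayesian or frequentist.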

Source: Extracorporeal Membrane Oxygenation for Severe Acute Respiratory Distress Syndrome and Posterior Probability of Mortality Benefit in a Post Hoc Bayesian Analysis of a Randomized Clinical Trial. | Critical Care Medicine | JAMA | JAMA Network

Here is a draft of a letter to the editor on this subject. Apologies for the very academic tone – that’s what we do for academic journals!

The study analyzed in their article was shut down prematurely because it was unlikely to attain the target level of performance. Their paper shows that this might have been avoided, and the technique shown to have benefit, if their analysis had been performed before terminating the trial. A related analysis could usefully have been done within the frequentist statistical framework. According to their Table 2, a frequentist analysis (equivalent to an uninformative prior) would have suggested a 96% chance that the treatment was beneficial, and an 85% chance that the relative risk (RR) was below 0.9.

The reason the original study appeared to be failing was not solely that it was analyzed with frequentist methods. It also appeared to fail because the bar for “success” was set very high, namely RR < 0.67. Thus, although the full Bayesian analysis in the article was more informative, even frequentist statistics can be used to investigate the implications of different definitions of success.

Credit for this observation goes to Nick. I will ask him for permission to include one of his emails to me on this subject.

Some U.S. police departments dump body-camera programs amid high costs – The Washington Post

Smaller departments that struggle with the cost of equipment and storage of data are ending or suspending programs aimed at transparency and accountability.

Source: Some U.S. police departments dump body-camera programs amid high costs – The Washington Post

My comment: this was predictable. Video data gets big very quickly. See my discussion from three years ago.
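
A rough back-of-the-envelope calculation shows why. The department size, duty cycle, and bitrate below are my own assumptions for illustration, not figures from the article:

```python
# Back-of-the-envelope estimate of body-camera storage growth.
# All inputs are rough assumptions for illustration.
officers        = 50          # a small department
hours_per_day   = 4           # recorded footage per officer per day
days_per_year   = 250
bitrate_mbps    = 5           # roughly 1080p video

gb_per_hour = bitrate_mbps * 3600 / 8 / 1000   # Mbit/s -> GB per camera-hour
tb_per_year = officers * hours_per_day * days_per_year * gb_per_hour / 1000

print(f"{gb_per_hour:.2f} GB per camera-hour")
print(f"{tb_per_year:.0f} TB per year for the whole department")
```

Even with these modest assumptions, a small department accumulates on the order of a hundred terabytes a year, which then has to be stored, indexed, and retained under evidence rules.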

SDG&E adopts residential Time of Use pricing – only 30 years late!

This picture looks exciting, doesn’t it? But the vertical axis is not to scale. In fact, the price changes are so small that they are barely visible; see the next figure. For seven months a year, my prices will vary by only $0.02/kWh over the course of a week!

https://www.sdge.com/sites/default/files/TOU_summer.jpg

Source: When you use energy matters | San Diego Gas & Electric

Our local utility, San Diego Gas & Electric, just sent us a notice that we will be switching to Time of Use (TOU) pricing. I have no objections, BUT:

  1. TOU was innovative in the 1980s. But for any house with a smart meter, which we all have now, it has been dominated by real-time pricing for at least 20 years.
  2. The price differentials are negligible: 1 cent per kWh, or about 3 percent! In the winter almost nobody will adjust their usage, or even keep track of it; see the rough calculation below. At least the differences in the summer are substantially larger, as much as 30¢ per kWh.
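
To see how little the winter differential matters, here is a rough calculation of what a household might save by shifting load. The consumption and shifting figures are my own illustrative assumptions, not SDG&E numbers:

```python
# Rough estimate of savings from shifting load under TOU price differentials.
# Consumption and shifting assumptions are illustrative, not SDG&E figures.
shiftable_kwh_per_day = 5       # e.g., dishwasher, laundry, some EV charging
winter_diff_per_kwh   = 0.01    # ~1 cent/kWh winter on-peak vs. off-peak
summer_diff_per_kwh   = 0.30    # up to ~30 cents/kWh in summer

winter_monthly = shiftable_kwh_per_day * 30 * winter_diff_per_kwh
summer_monthly = shiftable_kwh_per_day * 30 * summer_diff_per_kwh

print(f"Winter: about ${winter_monthly:.2f}/month for shifting {shiftable_kwh_per_day} kWh/day")
print(f"Summer: about ${summer_monthly:.2f}/month for the same behavior")
```

A dollar or two a month in winter is not going to change anyone’s laundry schedule; the summer differential is the only one large enough to matter.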


Why does the Harvard Business School, Michael Porter, teach the essence of business strategy is the elimination of competition, by regulation if possible? Is this legal? Is this basically socialism or communism? – Quora

Original question on Quora: Why does the Harvard Business School, Michael Porter, teach the essence of business strategy is the elimination of competition, by regulation if possible. Is this legal? Is this basically socialism or communism?

My response: Trying to pin this on Michael Porter is ridiculous. He says no such thing. Based on the way the question is phrased, I wonder if there is an ideological purpose in asking it.

But in any case, there is a serious issue behind the question, namely an increasing level of oligopoly (decreasing levels of competition) in many US industries. See, for example, “Big Companies Are Getting a Chokehold on the Economy” (“Even Goldman Sachs is worried that they’re stifling competition, holding down wages and weighing on growth”), or “America Has a Monopoly Problem—and It’s Huge.”

One theory about this trend is that it is partly due to the growing power of corporations in Washington. That, in turn, may be traced partly to the increasing role of money in elections, largely as a result of the infamous Supreme Court “Citizens United” decision. For example, the way Trump’s massive tax cuts were put together, without any hearings and in a very short period of time, and the number of “goodies” for many industries in the resulting package, would never have happened with previous major changes in tax law.

An effective strategy in some highly concentrated industries is to persuade the government to selectively regulate your industry in ways that favor large and established companies. That is, all companies may experience higher costs because of a regulation, but if your company can comply more cheaply than anyone else, it is still a net win for you. Pharmaceuticals are an example: pharma companies increasingly use the legal system, regulations, and side deals to keep generic drugs off the market for years after drug patents expire. The industry has also been very effective at keeping foreign competitors out, e.g. by blocking individual citizens from importing drugs from Canada.

(I buy one medication at $1 per pill from abroad, when it costs $30/pill at the local Rite-Aid. But it takes a lot of research and effort.)

Source: Why does the Harvard Business School, Michael Porter, teach the essence of business strategy is the elimination of competition, by regulation if possible. Is this legal? Is this basically socialism or communism? – Quora

Would ‘explainable AI’ force companies to give away too much? Not really.

Here is an argument for allowing companies to maintain a lot of secrecy about how their data mining (AI) models work. The claim is that revealing information would put companies at a competitive disadvantage. Sorry, that is not enough of a reason. And it’s not actually true, as far as I can tell.

The first consideration when discussing transparency in AI should be data, the fuel that powers the algorithms. Because data is the foundation for all AI, it is valid to want to know where the data…

Source: The problem with ‘explainable AI’ | TechCrunch

Here is my response.

Your questions are good ones. But you seem to think that explainability cannot be achieved except by giving away all the work that led to the AI system. That is a straw man. Take deep learning systems, for example. The IP includes:
1) The training set of data
2) The core architecture of the network (number of layers, etc.)
3) The training procedures over time, including all the testing and tuning that went on.
4) The resulting system (weights, filters, transforms, etc).
5) Higher-level “explanations,” whatever those may be. (For me, these might be a reduced-form model that is approximately linear and can be interpreted.)

Revealing even #4 would be somewhat useful to competitors, but not decisive. The original developers will be able to update and refine their model, while people with only #4 will not. The same goes for any of the other elements.
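
As an illustration of item 5, here is a minimal sketch of a “reduced-form” explanation: fit an interpretable linear surrogate to a black-box model’s predictions. Everything here, the black-box model, the made-up features, and the data, is hypothetical and stands in for whatever the real system would be:

```python
# Minimal sketch of item 5: approximate a black-box model with an
# interpretable linear surrogate. The black box and data are made up.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 4))                                  # four made-up features
y = 2 * X[:, 0] - X[:, 1] + 0.5 * X[:, 2] ** 2 + rng.normal(scale=0.1, size=1000)

black_box = GradientBoostingRegressor().fit(X, y)               # stand-in for the real system

# Fit a linear surrogate to the black box's own predictions
surrogate = LinearRegression().fit(X, black_box.predict(X))
print("Surrogate coefficients:", np.round(surrogate.coef_, 2))
# The coefficients give a rough, global "explanation" without revealing
# the training data, architecture, or tuning history (items 1-3).
```

Publishing something like the surrogate’s coefficients tells outsiders roughly which inputs drive the decisions, while items 1 through 4 stay in-house.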

I suspect the main fear about revealing this, at least among for-profit companies, is that it opens them up to second-guessing. For example, what do you want to bet that the systems now being used to predict recidivism have bugs? Someone with enough expertise and money might be able to make intelligent guesses about the bugs, although I don’t see how they could prove them.
Sure, such criticism would make companies more cautious and cost them money. And big companies might be better able to hide behind layers of lawyers and obfuscation. But those hypothetical problems are quite a distance in the future. Society deserves to know where these systems have problems, and should do more to find out. Let’s allow some experiments, and even different laws in different jurisdictions, to go forward for a few years. Preventing this just means trusting the self-appointed experts to do what is in everyone else’s best interests. We know how poorly that works!