Yes, there are some things ChatGPT is able to summarize quite well.

However, in other areas, it provides complete inaccuracies with absolute 'confidence'.

For example, when I asked it to give a summary of a book chapter I am requiring my students to read, it could not even get the title of the chapter correct, much less the summary of its contents.

Also, it 'confidently' gives completely erroneous quotes (i.e. it simply invents 'quotes' that are not contained in the source document, complete with fictitious page numbers -- I've even had it cite page numbers that do not exist in the source document).

It is good to know of such limitations.

GPT-4 was just demoed today.

Supposedly it is considerably more powerful. Whether it makes fewer mistakes like these is unknown at this point.

I just tested it earlier and it gave me one inaccurate reply.