I have read #155 and #172 in some detail and I believe I have a problem similar to #155.
`Constraint error: (201) Duplicate keys are not allowed, index: monos.pkMonos, key: "session"`
Unfortunately, I have not been able to reproduce it in a simple example that I tried to draw up for you.
I note this comment in #172:
> The fact that you are using insertOrReplace() means that if you insert a row with the same PK, it should replace an existing row instead of complaining. So, yes, that would be a bug if it is not happening.
...so it does appear that this is a bug. My schema looks a bit like this:
```javascript
schemaBuilder = lf.schema.create('myDatabase', 1);

schemaBuilder.createTable('fw_mime_types')
    .addColumn('id', lf.Type.INTEGER)
    .addColumn('mime_type', lf.Type.STRING)
    .addColumn('file_extension', lf.Type.STRING)
    .addColumn('popular', lf.Type.INTEGER)
    .addColumn('created_timestamp', lf.Type.INTEGER)
    .addColumn('modified_timestamp', lf.Type.STRING)
    .addColumn('modified_session_link', lf.Type.INTEGER)
    .addNullable(['mime_type', 'file_extension', 'popular', 'created_timestamp', 'modified_timestamp', 'modified_session_link'])
    .addPrimaryKey(['id']);

schemaBuilder.createTable('mono')
    .addColumn('key', lf.Type.STRING)
    .addColumn('value', lf.Type.NUMBER)
    .addPrimaryKey(['key']);

schemaBuilder.createTable('monos')
    .addColumn('key', lf.Type.STRING)
    .addColumn('value', lf.Type.STRING)
    .addColumn('created_timestamp', lf.Type.INTEGER)
    .addColumn('modified_timestamp', lf.Type.INTEGER)
    .addColumn('modified_session_link', lf.Type.INTEGER)
    .addNullable(['value', 'created_timestamp', 'modified_timestamp', 'modified_session_link'])
    .addPrimaryKey(['key']);

schemaBuilder
    .connect()
    .then(function (newDb) {
        db = newDb;
        db_mono = db.getSchema().table('mono');
        db_monos = db.getSchema().table('monos');
    });
```
...though I have about 10 tables in total, with greater complexity.
The main similarity to #155 is that I am using a STRING as a key, and not an INTEGER as most people would.
The error occurs when I try to insertOrReplace a second time using the same key, which in this case is "session". There are also a few other reads and writes to and from the same table (and various other tables) in between the insertOrReplace calls on that key.
Some observations:

- If, after the database is first created, I change the hard-coded key name to another string, the error no longer occurs, no matter how many times I reload the app in the browser. That is, the new key is written, and the read routine continues to read the row with the original key.
- If I delete the database from the console and reload, the error occurs on the second reload and on any subsequent insertOrReplace on that key.
- If I use a numerical string "0" as the key, the error does not occur.
- If I replace the hard-coded occurrences of "session" in the line `newData[0]["key"]="session";` with a variable containing "session", the error also does not occur.
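For reference, my mental model of insertOrReplace is plain upsert-by-primary-key semantics. Outside Lovefield, that expectation can be sketched with a Map keyed by the string primary key (a standalone illustration, not Lovefield code):

```javascript
// Standalone sketch (no Lovefield): the upsert semantics I expect from
// insertOrReplace(), modelled with a Map keyed by the string primary key.
function insertOrReplace(table, rows) {
  rows.forEach(function (row) {
    // An existing entry with the same key is silently replaced,
    // never rejected as a duplicate.
    table.set(row.key, row);
  });
}

var monos = new Map();
insertOrReplace(monos, [{ key: 'session', value: 'first' }]);
insertOrReplace(monos, [{ key: 'session', value: 'second' }]); // replaces, does not throw

console.log(monos.size);                 // prints 1
console.log(monos.get('session').value); // prints "second"
```

That second call is where my real code instead throws constraint error 201.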
```javascript
function dbStoreArray(tableName, index, array) {
    var newData = this[tableName];
    var row = [];
    var newRows = [];
    if (tableName == "session") {
        // The session data is stored as a single stringified row in "monos".
        tableName = "monos";
        newData = {};
        newData[0] = {};
        newData[0]["key"] = "session";
        var myData = this["session"]["000"];
        newData[0]["value"] = JSON.stringify(myData);
    }
    var table = db.getSchema().table(tableName);
    var keys = Object.keys(newData);
    var l = keys.length;
    for (var n = 0; n < l; n++) {
        var currentRow = newData[keys[n]];
        row = table.createRow(currentRow);
        newRows.push(row);
    }
    db
        .insertOrReplace()
        .into(table)
        .values(newRows)
        .exec()
        .then(function () {
            console.log("insert success");
        })
        .catch(function (error) {
            console.log("fail", error);
        });
    newRows = [];
}
```
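For clarity, the session branch above reduces the whole session payload to a single row for "monos", shaped like this (a standalone sketch; the nested "000" payload here is invented and only its shape matters):

```javascript
// Standalone sketch of the single row the session branch builds.
// The payload is invented for illustration; only the shape matters.
var sessionPayload = { "000": { user: "alice", loggedIn: true } };

var row = {
  key: "session",                               // hard-coded string primary key
  value: JSON.stringify(sessionPayload["000"])  // whole object stored as one string
};

console.log(row.key); // prints "session"
```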
This is the function that is called to store the data. It iterates over a JavaScript object containing 8 sets of data (objects) passed from the server, fetched with jQuery.ajax like this:
```javascript
$.ajax({
    method: "GET",
    url: "dbSync.php",
    data: { "l": latest }
})
    .done(function (response) {
        var result = JSON.parse(response);
        Object.keys(result).forEach(dbStoreArray, result);
    });
```
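For context, the second argument to forEach binds `this` inside dbStoreArray to the parsed response, which is why the function can read `this[tableName]`. A standalone sketch of that dispatch, with a stub in place of the real dbStoreArray and an invented two-table payload:

```javascript
// Standalone sketch: Object.keys(...).forEach(fn, thisArg) calls fn once per
// top-level key, with `this` bound to the parsed response object.
// The payload below is invented for illustration.
var result = {
  mono:  { 0: { key: 'latest', value: 42 } },
  monos: { 0: { key: 'session', value: '{}' } }
};

var calls = [];
function dbStoreArrayStub(tableName, index, array) {
  // `this` is the whole result object, so each call looks up its own slice.
  calls.push({ table: tableName, data: this[tableName] });
}

Object.keys(result).forEach(dbStoreArrayStub, result);

console.log(calls.length); // prints 2
```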
The first 7 of the 8 sets are stored in the database just fine; the last one, which contains the session data, is not. The session data is added on the server using a slightly different method from the other 7, but I have verified that the data / JSON format passed from the server and into the database is consistent at every stage.
I am testing on Chrome for Linux, version 71.0.3578.98 (Official Build) (64-bit), on Linux Mint 19.1.
I hope that gives you enough to go on.