Missing Peewee files.

pull/684/head
Louis Vézina 5 years ago
parent 2ddcbf6d7a
commit 0968f66e3f

@@ -0,0 +1,48 @@
## Playhouse
The `playhouse` namespace contains numerous extensions to Peewee. These include vendor-specific database extensions, high-level abstractions to simplify working with databases, and tools for low-level database operations and introspection.
### Vendor extensions
* [SQLite extensions](http://docs.peewee-orm.com/en/latest/peewee/sqlite_ext.html)
* Full-text search (FTS3/4/5)
* BM25 ranking algorithm implemented as SQLite C extension, backported to FTS4
* Virtual tables and C extensions
* Closure tables
* JSON extension support
* LSM1 (key/value database) support
* BLOB API
* Online backup API
* [APSW extensions](http://docs.peewee-orm.com/en/latest/peewee/playhouse.html#apsw): use Peewee with the powerful [APSW](https://github.com/rogerbinns/apsw) SQLite driver.
* [SQLCipher](http://docs.peewee-orm.com/en/latest/peewee/playhouse.html#sqlcipher-ext): encrypted SQLite databases.
* [SqliteQ](http://docs.peewee-orm.com/en/latest/peewee/playhouse.html#sqliteq): dedicated writer thread for multi-threaded SQLite applications. [More info here](http://charlesleifer.com/blog/multi-threaded-sqlite-without-the-operationalerrors/).
* [Postgresql extensions](http://docs.peewee-orm.com/en/latest/peewee/playhouse.html#postgres-ext)
* JSON and JSONB
* HStore
* Arrays
* Server-side cursors
* Full-text search
* [MySQL extensions](http://docs.peewee-orm.com/en/latest/peewee/playhouse.html#mysql-ext)
### High-level libraries
* [Extra fields](http://docs.peewee-orm.com/en/latest/peewee/playhouse.html#extra-fields)
* Compressed field
* PickleField
* [Shortcuts / helpers](http://docs.peewee-orm.com/en/latest/peewee/playhouse.html#shortcuts)
* Model-to-dict serializer
* Dict-to-model deserializer
* [Hybrid attributes](http://docs.peewee-orm.com/en/latest/peewee/playhouse.html#hybrid)
* [Signals](http://docs.peewee-orm.com/en/latest/peewee/playhouse.html#signals): pre/post-save, pre/post-delete, pre-init.
* [Dataset](http://docs.peewee-orm.com/en/latest/peewee/playhouse.html#dataset): high-level API for working with databases, popularized by the [project of the same name](https://dataset.readthedocs.io/).
* [Key/Value Store](http://docs.peewee-orm.com/en/latest/peewee/playhouse.html#kv): key/value store using SQLite. Supports *smart indexing* for *Pandas*-style queries.
### Database management and framework support
* [pwiz](http://docs.peewee-orm.com/en/latest/peewee/playhouse.html#pwiz): generate model code from a pre-existing database.
* [Schema migrations](http://docs.peewee-orm.com/en/latest/peewee/playhouse.html#migrate): modify your schema using high-level APIs. Even supports dropping or renaming columns in SQLite.
* [Connection pool](http://docs.peewee-orm.com/en/latest/peewee/playhouse.html#pool): simple connection pooling.
* [Reflection](http://docs.peewee-orm.com/en/latest/peewee/playhouse.html#reflection): low-level, cross-platform database introspection.
* [Database URLs](http://docs.peewee-orm.com/en/latest/peewee/playhouse.html#db-url): use URLs to connect to databases.
* [Test utils](http://docs.peewee-orm.com/en/latest/peewee/playhouse.html#test-utils): helpers for unit-testing Peewee applications.
* [Flask utils](http://docs.peewee-orm.com/en/latest/peewee/playhouse.html#flask-utils): paginated object lists, database connection management, and more.
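The "Database URLs" idea — a single string encoding driver, credentials, host, and database name — can be sketched with the standard library. This is an illustration of the concept only, not the playhouse `db_url` implementation:

```python
from urllib.parse import urlparse

def parse_db_url(url):
    # Break a database URL into the pieces a connect() call would need.
    parts = urlparse(url)
    return {
        'scheme': parts.scheme,          # e.g. 'postgres', 'mysql', 'sqlite'
        'user': parts.username,
        'password': parts.password,
        'host': parts.hostname,
        'port': parts.port,
        'database': parts.path.lstrip('/'),
    }

params = parse_db_url('postgres://alice:secret@db.example.com:5432/app')
```

`playhouse.db_url.connect()` performs a similar parse and then instantiates the matching peewee `Database` subclass for you.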

@@ -0,0 +1,73 @@
/* cache.h - definitions for the LRU cache
*
* Copyright (C) 2004-2015 Gerhard Häring <gh@ghaering.de>
*
* This file is part of pysqlite.
*
* This software is provided 'as-is', without any express or implied
* warranty. In no event will the authors be held liable for any damages
* arising from the use of this software.
*
* Permission is granted to anyone to use this software for any purpose,
* including commercial applications, and to alter it and redistribute it
* freely, subject to the following restrictions:
*
* 1. The origin of this software must not be misrepresented; you must not
* claim that you wrote the original software. If you use this software
* in a product, an acknowledgment in the product documentation would be
* appreciated but is not required.
* 2. Altered source versions must be plainly marked as such, and must not be
* misrepresented as being the original software.
* 3. This notice may not be removed or altered from any source distribution.
*/
#ifndef PYSQLITE_CACHE_H
#define PYSQLITE_CACHE_H
#include "Python.h"
/* The LRU cache is implemented as a combination of a doubly-linked list with a
 * dictionary. The list items are of type 'Node' and the dictionary has the
 * nodes as values. */
typedef struct _pysqlite_Node
{
    PyObject_HEAD
    PyObject* key;
    PyObject* data;
    long count;
    struct _pysqlite_Node* prev;
    struct _pysqlite_Node* next;
} pysqlite_Node;
typedef struct
{
    PyObject_HEAD
    int size;

    /* a dictionary mapping keys to Node entries */
    PyObject* mapping;

    /* the factory callable */
    PyObject* factory;

    pysqlite_Node* first;
    pysqlite_Node* last;

    /* if set, decrement the factory function when the Cache is deallocated.
     * this is almost always desirable, but not in the pysqlite context */
    int decref_factory;
} pysqlite_Cache;
extern PyTypeObject pysqlite_NodeType;
extern PyTypeObject pysqlite_CacheType;
int pysqlite_node_init(pysqlite_Node* self, PyObject* args, PyObject* kwargs);
void pysqlite_node_dealloc(pysqlite_Node* self);
int pysqlite_cache_init(pysqlite_Cache* self, PyObject* args, PyObject* kwargs);
void pysqlite_cache_dealloc(pysqlite_Cache* self);
PyObject* pysqlite_cache_get(pysqlite_Cache* self, PyObject* args);
int pysqlite_cache_setup_types(void);
#endif
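The structures above describe the classic LRU layout: a hash map for O(1) key lookup plus a doubly-linked list ordered by recency, with a factory callable invoked on cache misses. A plain-Python sketch of the same design (an illustrative model, not the pysqlite C code):

```python
class Node:
    """One cache entry: a key/value pair linked to its list neighbors."""
    def __init__(self, key, data):
        self.key = key
        self.data = data
        self.prev = None
        self.next = None

class LRUCache:
    """Dict gives O(1) lookup by key; the linked list tracks recency."""
    def __init__(self, factory, size=10):
        self.size = size
        self.factory = factory   # called to produce a value on cache miss
        self.mapping = {}        # key -> Node
        self.first = None        # most recently used
        self.last = None         # least recently used

    def _unlink(self, node):
        if node.prev: node.prev.next = node.next
        if node.next: node.next.prev = node.prev
        if self.first is node: self.first = node.next
        if self.last is node: self.last = node.prev

    def _push_front(self, node):
        node.prev, node.next = None, self.first
        if self.first: self.first.prev = node
        self.first = node
        if self.last is None: self.last = node

    def get(self, key):
        node = self.mapping.get(key)
        if node is not None:
            self._unlink(node)           # hit: move entry to the front
        else:
            if len(self.mapping) >= self.size:
                evicted = self.last      # full: evict least recently used
                self._unlink(evicted)
                del self.mapping[evicted.key]
            node = Node(key, self.factory(key))
            self.mapping[key] = node
        self._push_front(node)
        return node.data

cache = LRUCache(lambda key: key * 2, size=2)
values = [cache.get(1), cache.get(2), cache.get(1), cache.get(3)]
```

After the final `get(3)`, key 2 is evicted because the earlier `get(1)` moved 1 to the front of the recency list.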

@@ -0,0 +1,129 @@
/* connection.h - definitions for the connection type
*
* Copyright (C) 2004-2015 Gerhard Häring <gh@ghaering.de>
*
* This file is part of pysqlite.
*
* This software is provided 'as-is', without any express or implied
* warranty. In no event will the authors be held liable for any damages
* arising from the use of this software.
*
* Permission is granted to anyone to use this software for any purpose,
* including commercial applications, and to alter it and redistribute it
* freely, subject to the following restrictions:
*
* 1. The origin of this software must not be misrepresented; you must not
* claim that you wrote the original software. If you use this software
* in a product, an acknowledgment in the product documentation would be
* appreciated but is not required.
* 2. Altered source versions must be plainly marked as such, and must not be
* misrepresented as being the original software.
* 3. This notice may not be removed or altered from any source distribution.
*/
#ifndef PYSQLITE_CONNECTION_H
#define PYSQLITE_CONNECTION_H
#include "Python.h"
#include "pythread.h"
#include "structmember.h"
#include "cache.h"
#include "module.h"
#include "sqlite3.h"
typedef struct
{
    PyObject_HEAD
    sqlite3* db;

    /* the type detection mode. Only 0, PARSE_DECLTYPES, PARSE_COLNAMES or a
     * bitwise combination thereof makes sense */
    int detect_types;

    /* the timeout value in seconds for database locks */
    double timeout;

    /* for internal use in the timeout handler: when did the timeout handler
     * first get called with count=0? */
    double timeout_started;

    /* None for autocommit, otherwise a PyString with the isolation level */
    PyObject* isolation_level;

    /* NULL for autocommit, otherwise a string with the BEGIN statement; will
     * be freed in connection destructor */
    char* begin_statement;

    /* 1 if a check should be performed for each API call if the connection is
     * used from the same thread it was created in */
    int check_same_thread;

    int initialized;

    /* thread identification of the thread the connection was created in */
    long thread_ident;

    pysqlite_Cache* statement_cache;

    /* Lists of weak references to statements and cursors used within this
     * connection */
    PyObject* statements;
    PyObject* cursors;

    /* Counters for how many statements/cursors were created in the
     * connection. May be reset to 0 at certain intervals */
    int created_statements;
    int created_cursors;

    PyObject* row_factory;

    /* Determines how bytestrings from SQLite are converted to Python objects:
     * - PyUnicode_Type: Python Unicode objects are constructed from UTF-8
     *   bytestrings
     * - OptimizedUnicode: Like before, but for ASCII data, only PyStrings are
     *   created.
     * - PyString_Type: PyStrings are created as-is.
     * - Any custom callable: Any object returned from the callable called
     *   with the bytestring as single parameter.
     */
    PyObject* text_factory;

    /* remember references to functions/classes used in
     * create_function/create_aggregate, use these as dictionary keys, so we
     * can keep the total system refcount constant by clearing that dictionary
     * in connection_dealloc */
    PyObject* function_pinboard;

    /* a dictionary of registered collation name => collation callable
     * mappings */
    PyObject* collations;

    /* Exception objects */
    PyObject* Warning;
    PyObject* Error;
    PyObject* InterfaceError;
    PyObject* DatabaseError;
    PyObject* DataError;
    PyObject* OperationalError;
    PyObject* IntegrityError;
    PyObject* InternalError;
    PyObject* ProgrammingError;
    PyObject* NotSupportedError;
} pysqlite_Connection;
extern PyTypeObject pysqlite_ConnectionType;
PyObject* pysqlite_connection_alloc(PyTypeObject* type, int aware);
void pysqlite_connection_dealloc(pysqlite_Connection* self);
PyObject* pysqlite_connection_cursor(pysqlite_Connection* self, PyObject* args, PyObject* kwargs);
PyObject* pysqlite_connection_close(pysqlite_Connection* self, PyObject* args);
PyObject* _pysqlite_connection_begin(pysqlite_Connection* self);
PyObject* pysqlite_connection_commit(pysqlite_Connection* self, PyObject* args);
PyObject* pysqlite_connection_rollback(pysqlite_Connection* self, PyObject* args);
PyObject* pysqlite_connection_new(PyTypeObject* type, PyObject* args, PyObject* kw);
int pysqlite_connection_init(pysqlite_Connection* self, PyObject* args, PyObject* kwargs);
int pysqlite_connection_register_cursor(pysqlite_Connection* connection, PyObject* cursor);
int pysqlite_check_thread(pysqlite_Connection* self);
int pysqlite_check_connection(pysqlite_Connection* con);
int pysqlite_connection_setup_types(void);
#endif
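The `text_factory` slot in the struct above corresponds directly to the attribute of the same name in Python's built-in `sqlite3` module (which descends from pysqlite). A quick demonstration of the default str conversion versus raw bytestrings:

```python
import sqlite3

conn = sqlite3.connect(':memory:')

# By default, TEXT values are returned as str, decoded from UTF-8.
default_value = conn.execute("SELECT 'hello'").fetchone()[0]

# Assigning text_factory changes how SQLite bytestrings are converted;
# bytes returns them undecoded.
conn.text_factory = bytes
raw_value = conn.execute("SELECT 'hello'").fetchone()[0]
```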

@@ -0,0 +1,58 @@
/* module.h - definitions for the module
*
* Copyright (C) 2004-2015 Gerhard Häring <gh@ghaering.de>
*
* This file is part of pysqlite.
*
* This software is provided 'as-is', without any express or implied
* warranty. In no event will the authors be held liable for any damages
* arising from the use of this software.
*
* Permission is granted to anyone to use this software for any purpose,
* including commercial applications, and to alter it and redistribute it
* freely, subject to the following restrictions:
*
* 1. The origin of this software must not be misrepresented; you must not
* claim that you wrote the original software. If you use this software
* in a product, an acknowledgment in the product documentation would be
* appreciated but is not required.
* 2. Altered source versions must be plainly marked as such, and must not be
* misrepresented as being the original software.
* 3. This notice may not be removed or altered from any source distribution.
*/
#ifndef PYSQLITE_MODULE_H
#define PYSQLITE_MODULE_H
#include "Python.h"
#define PYSQLITE_VERSION "2.8.2"
extern PyObject* pysqlite_Error;
extern PyObject* pysqlite_Warning;
extern PyObject* pysqlite_InterfaceError;
extern PyObject* pysqlite_DatabaseError;
extern PyObject* pysqlite_InternalError;
extern PyObject* pysqlite_OperationalError;
extern PyObject* pysqlite_ProgrammingError;
extern PyObject* pysqlite_IntegrityError;
extern PyObject* pysqlite_DataError;
extern PyObject* pysqlite_NotSupportedError;
extern PyObject* pysqlite_OptimizedUnicode;
/* the functions time.time() and time.sleep() */
extern PyObject* time_time;
extern PyObject* time_sleep;
/* A dictionary, mapping column types (INTEGER, VARCHAR, etc.) to converter
 * functions, that convert the SQL value to the appropriate Python value.
 * The key is uppercase.
 */
extern PyObject* converters;
extern int _enable_callback_tracebacks;
extern int pysqlite_BaseTypeAdapted;
#define PARSE_DECLTYPES 1
#define PARSE_COLNAMES 2
#endif
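The `converters` dictionary and the PARSE_DECLTYPES flag surface in Python's `sqlite3` module as `register_converter()` and the `detect_types` argument to `connect()`. For example, registering a converter for a declared `BOOL` column type:

```python
import sqlite3

# Converters receive the raw value as bytes; the lookup key is the
# declared column type (stored uppercased, as the header comment notes).
sqlite3.register_converter('BOOL', lambda b: b == b'1')

conn = sqlite3.connect(':memory:', detect_types=sqlite3.PARSE_DECLTYPES)
conn.execute('CREATE TABLE t (flag BOOL)')
conn.execute('INSERT INTO t VALUES (?)', (1,))
flag = conn.execute('SELECT flag FROM t').fetchone()[0]
```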

File diff suppressed because it is too large.

@ -0,0 +1,137 @@
import sys
from difflib import SequenceMatcher
from random import randint
IS_PY3K = sys.version_info[0] == 3
# String UDF.
def damerau_levenshtein_dist(s1, s2):
    cdef:
        int i, j, del_cost, add_cost, sub_cost
        int s1_len = len(s1), s2_len = len(s2)
        list one_ago, two_ago, current_row
        list zeroes = [0] * (s2_len + 1)

    if IS_PY3K:
        current_row = list(range(1, s2_len + 2))
    else:
        current_row = range(1, s2_len + 2)

    current_row[-1] = 0
    one_ago = None

    for i in range(s1_len):
        two_ago = one_ago
        one_ago = current_row
        current_row = list(zeroes)
        current_row[-1] = i + 1
        for j in range(s2_len):
            del_cost = one_ago[j] + 1
            add_cost = current_row[j - 1] + 1
            sub_cost = one_ago[j - 1] + (s1[i] != s2[j])
            current_row[j] = min(del_cost, add_cost, sub_cost)

            # Handle transpositions.
            if (i > 0 and j > 0 and s1[i] == s2[j - 1]
                    and s1[i-1] == s2[j] and s1[i] != s2[j]):
                current_row[j] = min(current_row[j], two_ago[j - 2] + 1)

    return current_row[s2_len - 1]
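The recurrence above can be checked against a plain-Python version. This is a sketch of the standard optimal-string-alignment formulation for illustration, not the compiled UDF itself:

```python
def damerau_levenshtein(s1, s2):
    # DP table: d[i, j] = edit distance between the first i chars of s1
    # and the first j chars of s2.
    d = {}
    for i in range(len(s1) + 1):
        d[i, 0] = i
    for j in range(len(s2) + 1):
        d[0, j] = j
    for i in range(1, len(s1) + 1):
        for j in range(1, len(s2) + 1):
            cost = 0 if s1[i - 1] == s2[j - 1] else 1
            d[i, j] = min(d[i - 1, j] + 1,         # deletion
                          d[i, j - 1] + 1,         # insertion
                          d[i - 1, j - 1] + cost)  # substitution
            if (i > 1 and j > 1 and s1[i - 1] == s2[j - 2]
                    and s1[i - 2] == s2[j - 1]):
                # Transposition of two adjacent characters.
                d[i, j] = min(d[i, j], d[i - 2, j - 2] + 1)
    return d[len(s1), len(s2)]
```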
# String UDF.
def levenshtein_dist(a, b):
    cdef:
        int add, delete, change
        int i, j
        int n = len(a), m = len(b)
        list current, previous
        list zeroes

    if n > m:
        a, b = b, a
        n, m = m, n

    zeroes = [0] * (m + 1)

    if IS_PY3K:
        current = list(range(n + 1))
    else:
        current = range(n + 1)

    for i in range(1, m + 1):
        previous = current
        current = list(zeroes)
        current[0] = i
        for j in range(1, n + 1):
            add = previous[j] + 1
            delete = current[j - 1] + 1
            change = previous[j - 1]
            if a[j - 1] != b[i - 1]:
                change += 1
            current[j] = min(add, delete, change)

    return current[n]
# String UDF.
def str_dist(a, b):
    cdef:
        int t = 0

    for i in SequenceMatcher(None, a, b).get_opcodes():
        if i[0] == 'equal':
            continue
        t = t + max(i[4] - i[3], i[2] - i[1])
    return t
# Math Aggregate.
cdef class median(object):
    cdef:
        int ct
        list items

    def __init__(self):
        self.ct = 0
        self.items = []

    cdef selectKth(self, int k, int s=0, int e=-1):
        cdef:
            int idx
        if e < 0:
            e = len(self.items)
        idx = randint(s, e-1)
        idx = self.partition_k(idx, s, e)
        if idx > k:
            return self.selectKth(k, s, idx)
        elif idx < k:
            return self.selectKth(k, idx + 1, e)
        else:
            return self.items[idx]

    cdef int partition_k(self, int pi, int s, int e):
        cdef:
            int i, x
        val = self.items[pi]
        # Swap pivot w/last item.
        self.items[e - 1], self.items[pi] = self.items[pi], self.items[e - 1]
        x = s
        for i in range(s, e):
            if self.items[i] < val:
                self.items[i], self.items[x] = self.items[x], self.items[i]
                x += 1
        self.items[x], self.items[e-1] = self.items[e-1], self.items[x]
        return x

    def step(self, item):
        self.items.append(item)
        self.ct += 1

    def finalize(self):
        if self.ct == 0:
            return None
        elif self.ct < 3:
            return self.items[0]
        else:
            return self.selectKth(self.ct / 2)
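The `median` aggregate's selection logic is a randomized quickselect: partition around a random pivot, then recurse into the side containing the k-th element. A plain-Python sketch of the same partition/select loop (iterative rather than recursive, for illustration):

```python
import random

def partition(items, pi, s, e):
    # Move the pivot to the end, then partition [s, e) around its value.
    val = items[pi]
    items[e - 1], items[pi] = items[pi], items[e - 1]
    x = s
    for i in range(s, e - 1):
        if items[i] < val:
            items[i], items[x] = items[x], items[i]
            x += 1
    items[x], items[e - 1] = items[e - 1], items[x]
    return x

def select_kth(items, k):
    # Returns the k-th smallest (0-based) element, partially sorting in place.
    s, e = 0, len(items)
    while True:
        idx = partition(items, random.randint(s, e - 1), s, e)
        if idx > k:
            e = idx
        elif idx < k:
            s = idx + 1
        else:
            return items[idx]

median_of = select_kth([9, 1, 5, 3, 7], 2)  # middle of five elements
```

Quickselect runs in expected O(n), which is why the aggregate collects all items in `step()` and defers selection to `finalize()`.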

@@ -0,0 +1,123 @@
from peewee import *
from playhouse.sqlite_ext import JSONField


class BaseChangeLog(Model):
    timestamp = DateTimeField(constraints=[SQL('DEFAULT CURRENT_TIMESTAMP')])
    action = TextField()
    table = TextField()
    primary_key = IntegerField()
    changes = JSONField()


class ChangeLog(object):
    # Model class that will serve as the base for the changelog. This model
    # will be subclassed and mapped to your application database.
    base_model = BaseChangeLog

    # Template for the triggers that handle updating the changelog table.
    # table: table name
    # action: insert / update / delete
    # new_old: NEW or OLD (OLD is for DELETE)
    # primary_key: table primary key column name
    # column_array: output of build_column_array()
    # change_table: changelog table name
    template = """CREATE TRIGGER IF NOT EXISTS %(table)s_changes_%(action)s
AFTER %(action)s ON %(table)s
BEGIN
    INSERT INTO %(change_table)s
        ("action", "table", "primary_key", "changes")
    SELECT
        '%(action)s', '%(table)s', %(new_old)s."%(primary_key)s", "changes"
    FROM (
        SELECT json_group_object(
            col,
            json_array("oldval", "newval")) AS "changes"
        FROM (
            SELECT json_extract(value, '$[0]') as "col",
                   json_extract(value, '$[1]') as "oldval",
                   json_extract(value, '$[2]') as "newval"
            FROM json_each(json_array(%(column_array)s))
            WHERE "oldval" IS NOT "newval"
        )
    );
END;"""

    drop_template = 'DROP TRIGGER IF EXISTS %(table)s_changes_%(action)s'

    _actions = ('INSERT', 'UPDATE', 'DELETE')

    def __init__(self, db, table_name='changelog'):
        self.db = db
        self.table_name = table_name

    def _build_column_array(self, model, use_old, use_new, skip_fields=None):
        # Builds a list of SQL expressions for each field we are tracking. This
        # is used as the data source for change tracking in our trigger.
        col_array = []
        for field in model._meta.sorted_fields:
            if field.primary_key:
                continue

            if skip_fields is not None and field.name in skip_fields:
                continue

            column = field.column_name
            new = 'NULL' if not use_new else 'NEW."%s"' % column
            old = 'NULL' if not use_old else 'OLD."%s"' % column

            if isinstance(field, JSONField):
                # Ensure that values are cast to JSON so that the serialization
                # is preserved when calculating the old / new.
                if use_old: old = 'json(%s)' % old
                if use_new: new = 'json(%s)' % new

            col_array.append("json_array('%s', %s, %s)" % (column, old, new))

        return ', '.join(col_array)

    def trigger_sql(self, model, action, skip_fields=None):
        assert action in self._actions
        use_old = action != 'INSERT'
        use_new = action != 'DELETE'
        cols = self._build_column_array(model, use_old, use_new, skip_fields)
        return self.template % {
            'table': model._meta.table_name,
            'action': action,
            'new_old': 'NEW' if action != 'DELETE' else 'OLD',
            'primary_key': model._meta.primary_key.column_name,
            'column_array': cols,
            'change_table': self.table_name}

    def drop_trigger_sql(self, model, action):
        assert action in self._actions
        return self.drop_template % {
            'table': model._meta.table_name,
            'action': action}

    @property
    def model(self):
        if not hasattr(self, '_changelog_model'):
            class ChangeLog(self.base_model):
                class Meta:
                    database = self.db
                    table_name = self.table_name
            self._changelog_model = ChangeLog
        return self._changelog_model

    def install(self, model, skip_fields=None, drop=True, insert=True,
                update=True, delete=True, create_table=True):
        ChangeLog = self.model
        if create_table:
            ChangeLog.create_table()

        actions = list(zip((insert, update, delete), self._actions))
        if drop:
            for _, action in actions:
                self.db.execute_sql(self.drop_trigger_sql(model, action))

        for enabled, action in actions:
            if enabled:
                sql = self.trigger_sql(model, action, skip_fields)
                self.db.execute_sql(sql)
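The generated trigger can be exercised with nothing but the standard library's `sqlite3` module (assuming an SQLite build with the JSON1 functions, standard in recent Python releases). Here the template is filled in by hand for a hypothetical `person` table:

```python
import json
import sqlite3

conn = sqlite3.connect(':memory:')
# The trigger below is the UPDATE instance of the template, with the
# column_array expanded for two tracked columns (name, age).
conn.executescript("""
CREATE TABLE person (id INTEGER PRIMARY KEY, name TEXT, age INTEGER);
CREATE TABLE changelog (
    id INTEGER PRIMARY KEY,
    "action" TEXT, "table" TEXT, "primary_key" INTEGER, "changes" TEXT);

CREATE TRIGGER person_changes_UPDATE AFTER UPDATE ON person
BEGIN
    INSERT INTO changelog ("action", "table", "primary_key", "changes")
    SELECT 'UPDATE', 'person', NEW."id", "changes"
    FROM (
        SELECT json_group_object(col, json_array("oldval", "newval"))
            AS "changes"
        FROM (
            SELECT json_extract(value, '$[0]') AS "col",
                   json_extract(value, '$[1]') AS "oldval",
                   json_extract(value, '$[2]') AS "newval"
            FROM json_each(json_array(
                json_array('name', OLD."name", NEW."name"),
                json_array('age', OLD."age", NEW."age")))
            WHERE "oldval" IS NOT "newval"));
END;
""")
conn.execute("INSERT INTO person (name, age) VALUES ('huey', 3)")
conn.execute("UPDATE person SET age = 4 WHERE name = 'huey'")
action, changes_json = conn.execute(
    'SELECT "action", "changes" FROM changelog').fetchone()
changes = json.loads(changes_json)
```

Only the `age` column appears in the recorded changes: the `WHERE "oldval" IS NOT "newval"` filter drops columns whose value did not change.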