Dataset Preview
The full dataset viewer is not available (see the error below); only a preview of the rows is shown.
The dataset generation failed because of a cast error
Error code:   DatasetGenerationCastError
Exception:    DatasetGenerationCastError
Message:      An error occurred while generating the dataset

All the data files must have the same columns, but at some point there are 4 new columns ({'repo', 'stars', 'creation_date', 'file_path'})

This happened while the json dataset builder was generating data using

hf://datasets/Sheerio/SynPrune-Python/raw/negative/negative_raw.jsonl (at revision 24e9bb76cc540cca010a7b25ce76c3d90a74834f)

Please either edit the data files to have matching columns, or separate them into different configurations (see docs at https://hf.co/docs/hub/datasets-manual-configuration#multiple-configurations)
Traceback:    Traceback (most recent call last):
                File "/usr/local/lib/python3.12/site-packages/datasets/builder.py", line 1831, in _prepare_split_single
                  writer.write_table(table)
                File "/usr/local/lib/python3.12/site-packages/datasets/arrow_writer.py", line 714, in write_table
                  pa_table = table_cast(pa_table, self._schema)
                             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
                File "/usr/local/lib/python3.12/site-packages/datasets/table.py", line 2272, in table_cast
                  return cast_table_to_schema(table, schema)
                         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
                File "/usr/local/lib/python3.12/site-packages/datasets/table.py", line 2218, in cast_table_to_schema
                  raise CastError(
              datasets.table.CastError: Couldn't cast
              function: string
              creation_date: timestamp[s]
              repo: string
              file_path: string
              stars: int64
              label: int64
              to
              {'function': Value('string'), 'label': Value('int64')}
              because column names don't match
              
              During handling of the above exception, another exception occurred:
              
              Traceback (most recent call last):
                File "/src/services/worker/src/worker/job_runners/config/parquet_and_info.py", line 1339, in compute_config_parquet_and_info_response
                  parquet_operations = convert_to_parquet(builder)
                                       ^^^^^^^^^^^^^^^^^^^^^^^^^^^
                File "/src/services/worker/src/worker/job_runners/config/parquet_and_info.py", line 972, in convert_to_parquet
                  builder.download_and_prepare(
                File "/usr/local/lib/python3.12/site-packages/datasets/builder.py", line 894, in download_and_prepare
                  self._download_and_prepare(
                File "/usr/local/lib/python3.12/site-packages/datasets/builder.py", line 970, in _download_and_prepare
                  self._prepare_split(split_generator, **prepare_split_kwargs)
                File "/usr/local/lib/python3.12/site-packages/datasets/builder.py", line 1702, in _prepare_split
                  for job_id, done, content in self._prepare_split_single(
                                               ^^^^^^^^^^^^^^^^^^^^^^^^^^^
                File "/usr/local/lib/python3.12/site-packages/datasets/builder.py", line 1833, in _prepare_split_single
                  raise DatasetGenerationCastError.from_cast_error(
              datasets.exceptions.DatasetGenerationCastError: An error occurred while generating the dataset
              
              All the data files must have the same columns, but at some point there are 4 new columns ({'repo', 'stars', 'creation_date', 'file_path'})
              
              This happened while the json dataset builder was generating data using
              
              hf://datasets/Sheerio/SynPrune-Python/raw/negative/negative_raw.jsonl (at revision 24e9bb76cc540cca010a7b25ce76c3d90a74834f)
              
              Please either edit the data files to have matching columns, or separate them into different configurations (see docs at https://hf.co/docs/hub/datasets-manual-configuration#multiple-configurations)
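One way to apply the second suggestion without editing the files themselves is to load the mismatched JSONL on its own and drop its extra metadata columns; the longer-term fix is to declare separate configurations in the dataset's README metadata, as described in the linked documentation. Below is a minimal sketch, assuming a recent release of the datasets library that can resolve files inside a Hub dataset repository:

from datasets import load_dataset

# Hedged workaround sketch: load only the file whose rows carry the four extra
# columns as its own split, instead of letting the builder concatenate it with
# the files that follow the {'function', 'label'} schema.
negative = load_dataset(
    "Sheerio/SynPrune-Python",
    data_files="raw/negative/negative_raw.jsonl",
    split="train",
)

# Drop the metadata columns so this split matches the two-column schema above.
extra = [c for c in ("repo", "stars", "creation_date", "file_path") if c in negative.column_names]
negative = negative.remove_columns(extra)
print(negative.features)  # expected: only 'function' and 'label' remain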

Need help to make the dataset viewer work? Make sure to review how to configure the dataset viewer, and open a discussion for direct support.
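Until the viewer is fixed, individual files can also be inspected directly. A minimal sketch, assuming the huggingface_hub library is installed and using the file path reported in the error above:

import json

from huggingface_hub import hf_hub_download

# Hedged sketch: download one raw JSONL file from the dataset repository and
# read a single row, bypassing the dataset builder that fails above.
path = hf_hub_download(
    repo_id="Sheerio/SynPrune-Python",
    filename="raw/negative/negative_raw.jsonl",
    repo_type="dataset",
)
with open(path, encoding="utf-8") as f:
    row = json.loads(f.readline())

# Every row carries at least the two preview columns listed below.
print(row["label"], row["function"][:80])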

Columns: function (string), label (int64)
Each preview row below shows the function text on one line, followed by its label on the next.
def SimpleSector(sides=0, radius=1.0, startangle=0.0, endangle=45.0): newpoints = [] startangle = radians(startangle) endangle = radians(endangle) sides += 1 newpoints.append([0, 0, 0]) angle = (endangle - startangle) / sides x = cos(startangle) * radius y = sin(startangle) * radius newpoints.append([x, y, 0]) j = 1 while j < sides: t = angle * j x = cos(t + startangle) * radius y = sin(t + startangle) * radius newpoints.append([x, y, 0]) j += 1 x = cos(endangle) * radius y = sin(endangle) * radius newpoints.append([x, y, 0]) return newpoints
1
def requires_cuda_not_available() -> pytest.MarkDecorator: return pytest.mark.skipif(torch.cuda.is_available(), reason="CUDA is available")
0
def _convert_train_id_to_eval_id(prediction, train_id_to_eval_id): """Converts the predicted label for evaluation. There are cases where the training labels are not equal to the evaluation labels. This function is used to perform the conversion so that we could evaluate the results on the evaluation server. Args: prediction: Semantic segmentation prediction. train_id_to_eval_id: A list mapping from train id to evaluation id. Returns: Semantic segmentation prediction whose labels have been changed. """ converted_prediction = prediction.copy() for train_id, eval_id in enumerate(train_id_to_eval_id): converted_prediction[prediction == train_id] = eval_id return converted_prediction
1
def set_response(self, response: ChatCompletion): """ Set the mock to return a specific response. :param response: A ChatCompletion response to return. """ self.chat.completions.create.return_value = response
0
async def run_local_search( query: str, sv: SessionVariables, ) -> SearchResult: """Run local search.""" print(f"Local search query: {query}") # noqa T201 # build local search engine response_placeholder = st.session_state[ f"{SearchType.Local.value.lower()}_response_placeholder" ] response_container = st.session_state[f"{SearchType.Local.value.lower()}_container"] with response_placeholder, st.spinner("Generating answer using local search..."): empty_context_data: dict[str, pd.DataFrame] = {} response, context_data = await api.local_search( config=sv.graphrag_config.value, communities=sv.communities.value, entities=sv.entities.value, community_reports=sv.community_reports.value, text_units=sv.text_units.value, relationships=sv.relationships.value, covariates=sv.covariates.value, community_level=sv.dataset_config.value.community_level, response_type="Multiple Paragraphs", query=query, ) print(f"Local Response: {response}") # noqa T201 print(f"Context data: {context_data}") # noqa T201 # display response and reference context to UI search_result = SearchResult( search_type=SearchType.Local, response=str(response), context=context_data if isinstance(context_data, dict) else empty_context_data, ) display_search_result( container=response_container, result=search_result, stats=None ) if "response_lengths" not in st.session_state: st.session_state.response_lengths = [] st.session_state["response_lengths"].append({ "result": search_result, "search": SearchType.Local.value.lower(), }) return search_result
0
def _format_vat_cl(self, values): identification_types = [self.env.ref('l10n_latam_base.it_vat').id, self.env.ref('l10n_cl.it_RUT').id, self.env.ref('l10n_cl.it_RUN').id] partner_country_is_chile = (values.get('country_id') == self.env.ref('base.cl').id) or ( values.get('l10n_latam_identification_type_id') and self.env['l10n_latam.identification.type'].browse( values.get('l10n_latam_identification_type_id')).country_id == self.env.ref('base.cl')) if partner_country_is_chile and \ values.get('l10n_latam_identification_type_id') in identification_types and values.get('vat'): return stdnum.util.get_cc_module('cl', 'vat').format(values['vat']).replace('.', '').replace( 'CL', '').upper() else: return values['vat']
1
def _server_loop(self): """Main server loop in a separate thread""" print("Server thread started") self.socket.settimeout(1.0) # Timeout to allow for stopping while self.running: try: # Accept new connection try: client, address = self.socket.accept() print(f"Connected to client: {address}") # Handle client in a separate thread client_thread = threading.Thread( target=self._handle_client, args=(client,) ) client_thread.daemon = True client_thread.start() except socket.timeout: # Just check running condition continue except Exception as e: print(f"Error accepting connection: {str(e)}") time.sleep(0.5) except Exception as e: print(f"Error in server loop: {str(e)}") if not self.running: break time.sleep(0.5) print("Server thread stopped")
0
def log_loss(name, loss, loss_dict, use_video): # only calculate loss for video if use_video == 0: loss.data = torch.tensor(0.0, device=device, dtype=dtype) all_reduce_sum(loss.data) num_video = torch.tensor(use_video, device=device, dtype=dtype) all_reduce_sum(num_video) loss_item = loss.item() / num_video.item() loss_dict[name] = loss_item running_loss[name] += loss_item
0
def init_bigvgan(): global bigvgan_model, hifigan_model, sv_cn_model from BigVGAN import bigvgan bigvgan_model = bigvgan.BigVGAN.from_pretrained( "%s/GPT_SoVITS/pretrained_models/models--nvidia--bigvgan_v2_24khz_100band_256x" % (now_dir,), use_cuda_kernel=False, ) # if True, RuntimeError: Ninja is required to load C++ extensions # remove weight norm in the model and set to eval mode bigvgan_model.remove_weight_norm() bigvgan_model = bigvgan_model.eval() if is_half == True: bigvgan_model = bigvgan_model.half().to(device) else: bigvgan_model = bigvgan_model.to(device)
0
def test_get_reward_funcs(self): """Test get_reward_funcs with various reward functions.""" reward_names = [ "accuracy", "format", "reasoning_steps", "cosine", "repetition_penalty", "length", "tag_count", "code", "ioi_code", "code_format", "binary_code", ] reward_func_names = [ "accuracy_reward", "format_reward", "reasoning_steps_reward", "cosine_scaled_reward", "repetition_penalty_reward", "len_reward", "tag_count_reward", "code_reward", "ioi_code_reward", "code_format_reward", "binary_code_reward", ] args = GRPOScriptArguments( dataset_name="dummy", reward_funcs=reward_names, ) reward_funcs = get_reward_funcs(args) self.assertEqual(len(reward_funcs), 11) for func_name, func in zip(reward_func_names, reward_funcs): self.assertEqual(func_name, func.__name__)
0
def test_parse_hparam_args__equals(): hparam_args = ['--foo=HParams(boo=1)'] assert parse_hparam_args(hparam_args) == {'foo': HParams(boo=1)}
1
def file_exists(file_path: str) -> bool: """ Check if a file exists. Args: file_path (str): The path to the file. Returns: bool: True if the file exists, False otherwise. """ return os.path.exists(file_path)
0
def step(self, observations, states): vec_obs = self.obs_vectorizer.to_vecs(observations) feed_dict = { self.seq_lens_ph: [1] * len(observations), self.is_init_state_ph: [False] * len(observations), self.obs_ph: [[x] for x in vec_obs], self.mask_ph: [[1]] * len(observations) } if isinstance(self.first_state_ph, tuple): assert isinstance(states, tuple) for key, value in zip(self.first_state_ph, states): feed_dict[key] = value else: feed_dict[self.first_state_ph] = states acts, vals, states = self.session.run((self.actor_out, self.critic_out, self.states_out), feed_dict) action_params = [a[0] for a in acts] return { 'action_params': action_params, 'actions': self.action_dist.sample(action_params), 'states': states, 'values': np.array(vals).flatten() }
1
def _get_backend(fname): in_doc = InputDocument( path_or_stream=fname, format=InputFormat.ASCIIDOC, backend=AsciiDocBackend, ) doc_backend = in_doc._backend return doc_backend
0
def test_request_with_query_params(): """Test a request with query parameters.""" request = RequestModel( name="Request with query params", method="GET", url="https://example.com/api/search", params=[ QueryParam(name="q", value="test query"), QueryParam(name="page", value="1"), QueryParam(name="disabled", value="true", enabled=False), ], ) expected = "curl \\\n 'https://example.com/api/search?q=test+query&page=1'" assert request.to_curl() == expected
0
def get_all_songs(): songs = [] for root, dirs, files in os.walk(song_dir): for file in files: if file.endswith(".mp3"): songs.append(file) return songs
0
def valMeasuredParameter(self, obj): """Function to add ParameterName :param obj: element to add ParameterName """ valuesQAStats = [] valuesQAFlags = [] valuesParameter = [] for i in self.parModis: for val in i.retMeasure().values(): valuesQAStats.append(val['QAStats']) valuesQAFlags.append(val['QAFlags']) valuesParameter.append(val['ParameterName']) for i in set(valuesParameter): pn = self.ElementTree.SubElement(obj, 'ParameterName') pn.text = i
1
def __init__( self, config: "AgentConfig", id: str | None = None, name: str | None = None, agent0: "Agent|None" = None, log: Log.Log | None = None, paused: bool = False, streaming_agent: "Agent|None" = None, created_at: datetime | None = None, type: AgentContextType = AgentContextType.USER, last_message: datetime | None = None, ): # build context self.id = id or str(uuid.uuid4()) self.name = name self.config = config self.log = log or Log.Log() self.agent0 = agent0 or Agent(0, self.config, self) self.paused = paused self.streaming_agent = streaming_agent self.task: DeferredTask | None = None self.created_at = created_at or datetime.now(timezone.utc) self.type = type AgentContext._counter += 1 self.no = AgentContext._counter # set to start of unix epoch self.last_message = last_message or datetime.now(timezone.utc) existing = self._contexts.get(self.id, None) if existing: AgentContext.remove(self.id) self._contexts[self.id] = self
0
def setPixmap(self, pixmap): self.imageLabel.setPixmap(pixmap) self.hasImage = True
1
def __repr__(self): return '[%s] type=%s, value=%s' % (self.name, self.type, str(self.value))
1
def extract_url_content(url): downloaded = trafilatura.fetch_url(url) content = trafilatura.extract(downloaded) return {"url":url, "content":content}
0
def get_conv_template(name: str) -> Conversation: """Get a conversation template.""" return conv_templates[name].copy()
0
def no_stream_requests(ques, output_file): url = 'https://qanything-local-test-265.site.youdao.com/api/local_doc_qa/local_doc_chat' headers = {'content-type': 'application/json'} data = { "kb_ids": [ "KBf46828db208c4289a120a34f0fc96147", "KBc2440f13e98f4736b5ef81cfaebef3a9", "KBb78af28c73f74fb4ae6ad44b3c53302f", "KB6c2b097d83be430ab809e361fa8dcc8b", "KB69331d593f5b4b5bb555a0ea1b145e5b", "KB3cdc79f8c8d24a14bffd27e6570c33da" ], "question": ques, "user_id": "liujx_265", "streaming": False, "rerank": True, "history": [] } try: response = requests.post(url=url, headers=headers, json=data, timeout=60) res = response.json() res = data['question'] + '::' + res['response'] print(res) write_to_file_safe(output_file, res) except Exception as e: print(f"请求发送失败: {e}")
0
def test_query_ignore_older(self): """ wineventlog - Query by time (ignore_older than 2s) """ self.write_event_log(">=2 seconds old", eventID=20) time.sleep(2) self.write_event_log("~0 seconds old", eventID=10) evts = self.read_events(config={ "event_logs": [ { "name": self.providerName, "api": self.api, "ignore_older": "2s" } ] }) self.assertTrue(len(evts), 1) self.assertEqual(evts[0]["winlog.event_id"], 10) self.assertEqual(evts[0]["event.code"], 10)
1
def testLargePromptHint2(self): local_pdf_path = os.path.join(os.path.dirname(__file__), "gnarly_pdfs", "large_prompt_hint2.pdf") anchor_text = get_anchor_text(local_pdf_path, 2, pdf_engine="pdfreport") print(anchor_text) print(len(anchor_text)) self.assertLessEqual(len(anchor_text), 4000)
0
def test_soft_delete_instance(self): self._test_compute_api('soft_delete_instance', 'cast', instance=self.fake_instance_obj)
1
def hsv_to_rgb_handler(converter: TensorFlowConverter, tf_op: "tf.Operation"): raise NotImplementedError(f"[TensorFlowConverter] {tf_op.type} is not supported yet.")
1
def test_accuracy_reward_correct_answer(self): """Test accuracy_reward with a correct answer.""" completion = [[{"content": r"\boxed{\frac{63}{400}}"}]] solution = [r"\frac{63}{400}"] rewards = accuracy_reward(completion, solution) self.assertEqual(rewards[0], 1.0)
0
def repr_instance(self, x, level): try: s = builtins.repr(x) # Bugs in x.__repr__() can cause arbitrary # exceptions -- then make up something except Exception: return '<%s instance at %#x>' % (x.__class__.__name__, id(x)) if len(s) > self.maxother: i = max(0, (self.maxother-3)//2) j = max(0, self.maxother-3-i) s = s[:i] + '...' + s[len(s)-j:] return s
1
def __init__(self, model): self.model = model model.changed.connect(self.model_changed)
1
def _dispatch(self, method, params): """Dispatches the XML-RPC method. XML-RPC calls are forwarded to a registered function that matches the called XML-RPC method name. If no such function exists then the call is forwarded to the registered instance, if available. If the registered instance has a _dispatch method then that method will be called with the name of the XML-RPC method and its parameters as a tuple e.g. instance._dispatch('add',(2,3)) If the registered instance does not have a _dispatch method then the instance will be searched to find a matching method and, if found, will be called. Methods beginning with an '_' are considered private and will not be called. """ func = None try: # check to see if a matching function has been registered func = self.funcs[method] except KeyError: if self.instance is not None: # check for a _dispatch method if hasattr(self.instance, '_dispatch'): return self.instance._dispatch(method, params) else: # call instance method directly try: func = resolve_dotted_attribute( self.instance, method, self.allow_dotted_names ) except AttributeError: pass if func is not None: return func(*params) else: raise Exception('method "%s" is not supported' % method)
1
def current_temperature(self): """Return the current temperature.""" return self._current_temperature
1
def transition_power_noise_accumulator(self, num_steps): """Computes power sums in closed form.""" def _pack_and_reshape(*values): return array_ops.reshape( array_ops.stack(axis=1, values=values), array_ops.concat(values=[array_ops.shape(num_steps), [2, 2]], axis=0)) num_steps = math_ops.cast(num_steps, self.dtype) noise_transitions = num_steps - 1 noise_transform = ops.convert_to_tensor(self.get_noise_transform(), self.dtype) noise_covariance_transformed = math_ops.matmul( math_ops.matmul(noise_transform, self.state_transition_noise_covariance), noise_transform, adjoint_b=True) # Un-packing the transformed noise as: # [[a b] # [c d]] a, b, c, d = array_ops.unstack( array_ops.reshape(noise_covariance_transformed, [-1, 4]), axis=1) sum_of_first_n = noise_transitions * (noise_transitions + 1) / 2 sum_of_first_n_squares = sum_of_first_n * (2 * noise_transitions + 1) / 3 return _pack_and_reshape( num_steps * a + sum_of_first_n * (b + c) + sum_of_first_n_squares * d, num_steps * b + sum_of_first_n * d, num_steps * c + sum_of_first_n * d, num_steps * d)
1
def test_get_not_found(self): """Test retrieving a value that doesn't exist in the cache.""" cache = TimedCache(datetime.timedelta(seconds=10)) # Key doesn't exist assert cache.get("nonexistent_key") is TimedCache.NOT_FOUND
0
def create_check_constraint(self, name, source, condition, schema=None, **kw): """Issue a "create check constraint" instruction using the current migration context. e.g.:: from alembic import op from sqlalchemy.sql import column, func op.create_check_constraint( "ck_user_name_len", "user", func.len(column('name')) > 5 ) CHECK constraints are usually against a SQL expression, so ad-hoc table metadata is usually needed. The function will convert the given arguments into a :class:`sqlalchemy.schema.CheckConstraint` bound to an anonymous table in order to emit the CREATE statement. :param name: Name of the check constraint. The name is necessary so that an ALTER statement can be emitted. For setups that use an automated naming scheme such as that described at `NamingConventions <http://www.sqlalchemy.org/trac/wiki/UsageRecipes/NamingConventions>`_, ``name`` here can be ``None``, as the event listener will apply the name to the constraint object when it is associated with the table. :param source: String name of the source table. :param condition: SQL expression that's the condition of the constraint. Can be a string or SQLAlchemy expression language structure. :param deferrable: optional bool. If set, emit DEFERRABLE or NOT DEFERRABLE when issuing DDL for this constraint. :param initially: optional string. If set, emit INITIALLY <value> when issuing DDL for this constraint. :param schema: Optional schema name to operate within. ..versionadded:: 0.4.0 """ self.impl.add_constraint( self._check_constraint(name, source, condition, schema=schema, **kw) )
1
def update_profile_model(file_path: str): profile_rules: dict = http_get( url='https://miot-spec.org/instance/translate/models') if not profile_rules and 'models' not in profile_rules and not isinstance( profile_rules['models'], dict): raise ValueError('Failed to get profile rule') local_rules: dict = load_yaml_file( yaml_file=file_path) or {} for rule, ts in profile_rules['models'].items(): if rule not in local_rules: local_rules[rule] = {'ts': ts} else: local_rules[rule]['ts'] = ts for mode in SPECIAL_MODELS: if mode not in local_rules: local_rules[mode] = {'ts': 1531108800} else: local_rules[mode]['ts'] = 1531108800 local_rules = dict(sorted(local_rules.items())) save_yaml_file( yaml_file=file_path, data=local_rules)
0
def execute_command(self, command): """Execute a command in the main Blender thread""" try: return self._execute_command_internal(command) except Exception as e: print(f"Error executing command: {str(e)}") traceback.print_exc() return {"status": "error", "message": str(e)}
0
def __str__(self): return '.'.join(map(str, self))
1
def _set_retry_after(self, value): if value is None: if 'retry-after' in self.headers: del self.headers['retry-after'] return elif isinstance(value, datetime): value = http_date(value) else: value = str(value) self.headers['Retry-After'] = value
1
def test_lucene_sanitize(): # Call the function with test data queries = [ ( 'This has every escape character + - && || ! ( ) { } [ ] ^ " ~ * ? : \\ /', '\\This has every escape character \\+ \\- \\&\\& \\|\\| \\! \\( \\) \\{ \\} \\[ \\] \\^ \\" \\~ \\* \\? \\: \\\\ \\/', ), ('this has no escape characters', 'this has no escape characters'), ] for query, assert_result in queries: result = lucene_sanitize(query) assert assert_result == result
0
def __str__(self): s = 'app 0x%02x, verb 0x%02x, len %d' % (self.app, self.verb, len(self.data)) if len(self.data) > 0: s += ', data %s' % hexlify(self.data) return s
1
async def mock_embedding_func(texts): return np.random.rand(len(texts), 10) # 返回10维随机向量
0
def do_unrealize(self): if self._msg is not None: self._api.cancel(self._msg)
1
async def initialize( self, connection_type: str, server_url: str | None = None, ) -> None: """Initialize the MCP agent with the appropriate connection.""" logger.info(f"Initializing MCPAgent with {connection_type} connection...") if connection_type == "stdio": await self.agent.initialize( connection_type="stdio", command=sys.executable, args=["-m", self.server_reference], ) else: # sse await self.agent.initialize(connection_type="sse", server_url=server_url) logger.info(f"Connected to MCP server via {connection_type}")
0
def flake8(file_path: str) -> str: """Run flake8 on a given file and return the output as a string""" if Path(file_path).suffix != ".py": return "" cmd = registry.get("LINT_COMMAND", "flake8 --isolated --select=F821,F822,F831,E111,E112,E113,E999,E902 {file_path}") # don't use capture_output because it's not compatible with python3.6 out = subprocess.run(cmd.format(file_path=file_path), shell=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE) return out.stdout.decode()
0
def get_youtube_cache_path() -> str: """ Gets the path to the YouTube cache file. Returns: path (str): The path to the YouTube cache folder """ return os.path.join(get_cache_path(), 'youtube.json')
0
def replace_with_load_state( init_state: Any, load_state: Any, load_rename_rules: Optional[list[tuple[str, str]]] = None, load_exclude_rules: Optional[list[str]] = None, mesh_config: tuple = (1, 1), ) -> Any: flatten_load, _ = jax.tree_util.tree_flatten_with_path(load_state) flatten_init, structure_init = jax.tree_util.tree_flatten_with_path(init_state) load_map = {path_tuple_to_string(path): tensor for path, tensor in flatten_load} replaced = [] num_replicas = 1 data_model_shards = math.prod(mesh_config) for i, (init_path, tensor) in enumerate(flatten_init): init_path_str = path_tuple_to_string(init_path) load_path_str = get_load_path_str(init_path_str, load_rename_rules, load_exclude_rules) if load_path_str is None: rank_logger.info(f"Excluded from restore: {init_path_str}.") replaced.append(tensor) elif load_path_str in load_map: if load_path_str == init_path_str: rank_logger.info(f"Restored from ckpt: {init_path_str}.") else: rank_logger.info(f"Restored from ckpt: {init_path_str} <-- {load_path_str}.") replaced.append(load_map[load_path_str]) else: rank_logger.info(f"Not found in ckpt: {init_path_str}.") if (i % num_replicas) == ((jax.process_index() // data_model_shards) % num_replicas): replaced.append(tensor) else: replaced.append(np.zeros_like(tensor)) return jax.tree_util.tree_unflatten(structure_init, replaced)
0
def drop_column(self, table_name, column_name, **kw): """Issue a "drop column" instruction using the current migration context. e.g.:: drop_column('organization', 'account_id') :param table_name: name of table :param column_name: name of column :param schema: Optional schema name to operate within. .. versionadded:: 0.4.0 :param mssql_drop_check: Optional boolean. When ``True``, on Microsoft SQL Server only, first drop the CHECK constraint on the column using a SQL-script-compatible block that selects into a @variable from sys.check_constraints, then exec's a separate DROP CONSTRAINT for that constraint. :param mssql_drop_default: Optional boolean. When ``True``, on Microsoft SQL Server only, first drop the DEFAULT constraint on the column using a SQL-script-compatible block that selects into a @variable from sys.default_constraints, then exec's a separate DROP CONSTRAINT for that default. """ self.impl.drop_column( table_name, self._column(column_name, NULLTYPE), **kw )
1
def __init__(self, ble, name='mpy-uart', rxbuf=100): self._ble = ble self._ble.active(True) self._ble.irq(handler=self._irq) ((self._tx_handle, self._rx_handle,),) = self._ble.gatts_register_services((_UART_SERVICE,)) # Increase the size of the rx buffer and enable append mode. self._ble.gatts_set_buffer(self._rx_handle, rxbuf, True) self._connections = set() self._rx_buffer = bytearray() self._handler = None # Optionally add services=[_UART_UUID], but this is likely to make the payload too large. self._payload = advertising_payload(name=name, appearance=_ADV_APPEARANCE_GENERIC_COMPUTER) self._advertise()
1
def main(args): all_task = [executor.submit(single_job, utt) for utt in utt2wav.keys()] utt2speech_token = {} for future in tqdm(as_completed(all_task)): utt, speech_token = future.result() utt2speech_token[utt] = speech_token torch.save(utt2speech_token, '{}/utt2speech_token.pt'.format(args.dir))
0
def __iadd__(self, other): return self + other
1
def yolov10_inference_for_examples(image, model_path, image_size, conf_threshold): annotated_image, _ = yolov10_inference(image, None, model_path, image_size, conf_threshold) return annotated_image
0
def _tokenize_text_segment(self, text: str, speaker: int) -> Tuple[torch.Tensor, torch.Tensor]: frame_tokens = [] frame_masks = [] text_tokens = self._text_tokenizer.encode(f"[{speaker}]{text}") text_frame = torch.zeros(len(text_tokens), 33).long() text_frame_mask = torch.zeros(len(text_tokens), 33).bool() text_frame[:, -1] = torch.tensor(text_tokens) text_frame_mask[:, -1] = True frame_tokens.append(text_frame.to(self.device)) frame_masks.append(text_frame_mask.to(self.device)) return torch.cat(frame_tokens, dim=0), torch.cat(frame_masks, dim=0)
0
def import_model( in_path: Path, out_path: Path, weights_per_step_schedule: list[int] | None = None, silent: bool = False, max_out_n_q: int | None = None, ) -> None: if in_path.suffix == ".safetensors": tch_model = load_file(in_path) else: pkg = torch.load(in_path, map_location=torch.device("cpu"), weights_only=False) tch_model = pkg["fsdp_best_state"]["model"] in_n_q: int | None = None for idx in range(999): name = f"emb.{idx}.weight" if name not in tch_model: in_n_q = idx break out_n_q: int | None = None for idx in range(999): name = f"linears.{idx}.weight" if name not in tch_model: out_n_q = idx break assert in_n_q is not None assert out_n_q is not None if not silent: print(f"in_n_q: {in_n_q}, out_n_q: {out_n_q}") if weights_per_step_schedule is not None: if len(weights_per_step_schedule) != out_n_q: raise ValueError("inconsistent weights_per_step_schedule", len(weights_per_step_schedule), out_n_q) depformer_layers: int | None = None for idx in range(999): if f"depformer.layers.{idx}.self_attn.in_proj_weight" not in tch_model: depformer_layers = idx break assert depformer_layers is not None if not silent: print(f"depformer layers: {depformer_layers}") model = {} for name in ["text_emb.weight", "text_linear.weight"]: model[name] = tch_model[name] for name in tch_model.keys(): if name.startswith("condition_provider.conditioners"): model[name] = tch_model[name] model["out_norm.weight"] = tch_model["out_norm.alpha"][0, 0] for idx in range(in_n_q): src_name = f"emb.{idx}.weight" dst_name = f"audio_embs.{idx}.weight" model[dst_name] = tch_model[src_name] for k, v in sorted(tch_model.items()): print(k, v.shape, v.dtype) if k.startswith("transformer"): if k.endswith(".alpha"): v = v[0, 0] k = k.replace(".alpha", ".weight") k = k.replace(".in_proj_weight", ".in_proj.weight") model[k] = v # Only export the first slices of the depformer (main). if max_out_n_q is not None: exported_out_n_q = min(max_out_n_q, out_n_q) print(f"only exporting the first {exported_out_n_q} depformer layers") else: exported_out_n_q = out_n_q max_df_steps = out_n_q if weights_per_step_schedule is not None: max_df_steps = max(weights_per_step_schedule) + 1 for idx in range(exported_out_n_q): if weights_per_step_schedule is not None: tch_idx = weights_per_step_schedule[idx] else: tch_idx = idx base = f"depformer.slices.{idx}." model[base + "linear_in.weight"] = tch_model[f"depformer_in.{tch_idx}.weight"].clone() model[base + "linear_out.weight"] = tch_model[f"linears.{idx}.weight"] if idx == 0: model[base + "emb.weight"] = tch_model["depformer_text_emb.weight"] if "depformer_text_emb.low_rank.weight" in tch_model: model[base + "emb.low_rank.weight"] = tch_model["depformer_text_emb.low_rank.weight"].clone() else: model[base + "emb.weight"] = tch_model[f"depformer_emb.{idx-1}.weight"].clone() if f"depformer_emb.{idx-1}.low_rank.weight" in tch_model: model[base + "emb.low_rank.weight"] = tch_model[f"depformer_emb.{idx-1}.low_rank.weight"].clone() for layer_idx in range(depformer_layers): layer = base + f"transformer.layers.{layer_idx}." 
# WARNING: note that this uses in_proj_weight vs out_proj.weight model[layer + "self_attn.in_proj.weight"] = ( tch_model[f"depformer.layers.{layer_idx}.self_attn.in_proj_weight"] .chunk(max_df_steps)[tch_idx] .clone() ) model[layer + "self_attn.out_proj.weight"] = ( tch_model[f"depformer.layers.{layer_idx}.self_attn.out_proj.weight"] .chunk(max_df_steps)[tch_idx] .clone() ) model[layer + "norm1.weight"] = tch_model[ f"depformer.layers.{layer_idx}.norm1.alpha" ][0, 0].clone() model[layer + "norm2.weight"] = tch_model[ f"depformer.layers.{layer_idx}.norm2.alpha" ][0, 0].clone() model[layer + "gating.linear_in.weight"] = tch_model[ f"depformer.layers.{layer_idx}.gating.{tch_idx}.linear_in.weight" ].clone() model[layer + "gating.linear_out.weight"] = tch_model[ f"depformer.layers.{layer_idx}.gating.{tch_idx}.linear_out.weight" ].clone() save_file(model, out_path)
0
def __init__( self, model="F5TTS_v1_Base", ckpt_file="", vocab_file="", ode_method="euler", use_ema=True, vocoder_local_path=None, device=None, hf_cache_dir=None, ): model_cfg = OmegaConf.load(str(files("f5_tts").joinpath(f"configs/{model}.yaml"))) model_cls = get_class(f"f5_tts.model.{model_cfg.model.backbone}") model_arc = model_cfg.model.arch self.mel_spec_type = model_cfg.model.mel_spec.mel_spec_type self.target_sample_rate = model_cfg.model.mel_spec.target_sample_rate self.ode_method = ode_method self.use_ema = use_ema if device is not None: self.device = device else: import torch self.device = ( "cuda" if torch.cuda.is_available() else "xpu" if torch.xpu.is_available() else "mps" if torch.backends.mps.is_available() else "cpu" ) # Load models self.vocoder = load_vocoder( self.mel_spec_type, vocoder_local_path is not None, vocoder_local_path, self.device, hf_cache_dir ) repo_name, ckpt_step, ckpt_type = "F5-TTS", 1250000, "safetensors" # override for previous models if model == "F5TTS_Base": if self.mel_spec_type == "vocos": ckpt_step = 1200000 elif self.mel_spec_type == "bigvgan": model = "F5TTS_Base_bigvgan" ckpt_type = "pt" elif model == "E2TTS_Base": repo_name = "E2-TTS" ckpt_step = 1200000 if not ckpt_file: ckpt_file = str( cached_path(f"hf://SWivid/{repo_name}/{model}/model_{ckpt_step}.{ckpt_type}", cache_dir=hf_cache_dir) ) self.ema_model = load_model( model_cls, model_arc, ckpt_file, self.mel_spec_type, vocab_file, self.ode_method, self.use_ema, self.device )
0
def validate_email(email: str) -> bool: """Validate the format of an email address.""" return bool(ConfigValidator.EMAIL_REGEX.match(email))
0
def irq(self, handler): self._handler = handler
1
def testGetChannelIndex(self): data_formats = ['NHWC', 'NCHW'] for data_format in data_formats: index = nasnet_utils.get_channel_index(data_format) correct_index = 3 if data_format == 'NHWC' else 1 self.assertEqual(index, correct_index)
1
def finalize_options(self): """Set final values for all the options that this command supports. This is always called as late as possible, ie. after any option assignments from the command-line or from other commands have been done. Thus, this is the place to code option dependencies: if 'foo' depends on 'bar', then it is safe to set 'foo' from 'bar' as long as 'foo' still has the same value it was assigned in 'initialize_options()'. This method must be implemented by all command classes. """ raise RuntimeError("abstract method -- subclass %s must override" % self.__class__)
1
def __init__(self, api, *args, **kwargs): self.api = api self.as_generator = kwargs.pop("as_generator", False) self.return_json = kwargs.pop("return_json", True) self.parameters = {} self._build_parameters(args, kwargs) self._build_path()
1
def default_value(self): """Returns the default value of the platform parameter. Returns: *. The default value of the platform parameter. """ return self._default_value
1
def test_add_metrics_individual_params(tracer): """Test adding metrics using individual parameters""" tracer.trace = {} # Initialize trace tracer.add_metrics( name="test_metric", score=0.95, reasoning="Good performance", cost=0.01, latency=100, metadata={"key": "value"}, config={"threshold": 0.8} ) assert len(tracer.trace_metrics) == 1 metric = tracer.trace_metrics[0] assert metric["name"] == "test_metric" assert metric["score"] == 0.95 assert metric["reason"] == "Good performance" assert metric["source"] == "user" assert metric["cost"] == 0.01 assert metric["latency"] == 100 assert metric["metadata"] == {"key": "value"} assert metric["config"] == {"threshold": 0.8}
0
def __repr__(self): return f"Flake8Error(filename={self.filename}, line_number={self.line_number}, col_number={self.col_number}, problem={self.problem})"
0
def parse_nms_url(url): """Parse NMS url into normalized parts like scheme, user, host and others. Example NMS URL: auto://admin:[email protected]:2000/ NMS URL parts: .. code-block:: none auto True if url starts with auto://, protocol will be automatically switched to https if http not supported; scheme (auto) connection protocol (http or https); user (admin) NMS user; password (nexenta) NMS password; host (192.168.1.1) NMS host; port (2000) NMS port. :param url: url string :return: tuple (auto, scheme, user, password, host, port, path) """ pr = urlparse.urlparse(url) scheme = pr.scheme auto = scheme == 'auto' if auto: scheme = 'http' user = 'admin' password = 'nexenta' if '@' not in pr.netloc: host_and_port = pr.netloc else: user_and_password, host_and_port = pr.netloc.split('@', 1) if ':' in user_and_password: user, password = user_and_password.split(':') else: user = user_and_password if ':' in host_and_port: host, port = host_and_port.split(':', 1) else: host, port = host_and_port, '2000' return auto, scheme, user, password, host, port, '/rest/nms/'
1
def load_video_dir(root, dirs, save_dir, save_name): videos, sparse_videos = [], [] first_videos = [] for idx, cdir in enumerate(dirs): annot_path = osp.join(root, cdir, 'annot') frame_path = osp.join(root, cdir, 'extraction') all_frames = glob.glob( osp.join(frame_path, '*.png') ) all_annots = glob.glob( osp.join(annot_path, '*.pts') ) assert len(all_frames) == len(all_annots), 'The length is not right for {} : {} vs {}'.format(cdir, len(all_frames), len(all_annots)) all_frames = sorted(all_frames) all_annots = sorted(all_annots) current_video = [] txtfile = open(osp.join(save_dir, save_name + cdir), 'w') nonefile = open(osp.join(save_dir, save_name + cdir + '.none'), 'w') all_sizes = [] for frame, annot in zip(all_frames, all_annots): basename_f = osp.basename(frame) basename_a = osp.basename(annot) assert basename_a[:6] == basename_f[:6], 'The name of {} is not right with {}'.format(frame, annot) current_video.append( (frame, annot) ) box_str = datasets.dataset_utils.for_generate_box_str(annot, 68, EXPAND_RATIO) txtfile.write('{} {} {}\n'.format(frame, annot, box_str)) nonefile.write('{} None {}\n'.format(frame, box_str)) all_sizes.append( str2size(box_str) ) if len(current_video) == 1: first_videos.append( (frame, annot) ) txtfile.close() nonefile.close() videos.append( current_video ) all_sizes = np.array( all_sizes ) print ('--->>> {:} : [{:02d}/{:02d}] : {:} has {:} frames | face size : mean={:.2f}, std={:.2f}'.format(save_name, idx, len(dirs), cdir, len(all_frames), all_sizes.mean(), all_sizes.std())) for jxj, video in enumerate(current_video): if jxj <= 3 or jxj + 3 >= len(current_video): continue if jxj % 10 == 3: sparse_videos.append( video ) txtfile = open(osp.join(save_dir, save_name), 'w') nonefile = open(osp.join(save_dir, save_name + '.none'), 'w') for video in videos: for cpair in video: box_str = datasets.dataset_utils.for_generate_box_str(cpair[1], 68, EXPAND_RATIO) txtfile.write('{} {} {}\n'.format(cpair[0], cpair[1], box_str)) nonefile.write('{} {} {}\n'.format(cpair[0], 'None', box_str)) txtfile.flush() nonefile.flush() txtfile.close() nonefile.close() txtfile = open(osp.join(save_dir, save_name + '.sparse' + afterfix), 'w') nonefile = open(osp.join(save_dir, save_name + '.sparse.none' + afterfix), 'w') for cpair in sparse_videos: box_str = datasets.dataset_utils.for_generate_box_str(cpair[1], 68, EXPAND_RATIO) txtfile.write('{} {} {}\n'.format(cpair[0], cpair[1], box_str)) nonefile.write('{} {} {}\n'.format(cpair[0], 'None', box_str)) txtfile.close() nonefile.close() txtfile = open(osp.join(save_dir, save_name + '.first'), 'w') for cpair in first_videos: box_str = datasets.dataset_utils.for_generate_box_str(cpair[1], 68, EXPAND_RATIO) txtfile.write('{} {} {}\n'.format(cpair[0], cpair[1], box_str)) txtfile.close() print ('{} finish save into {}'.format(save_name, save_dir)) return videos
1
def _get_browser_options(self, user_agent=None): """获取浏览器配置""" co = ChromiumOptions() try: extension_path = self._get_extension_path("turnstilePatch") co.add_extension(extension_path) except FileNotFoundError as e: logging.warning(f"警告: {e}") browser_path = os.getenv("BROWSER_PATH") if browser_path: co.set_paths(browser_path=browser_path) co.set_pref("credentials_enable_service", False) co.set_argument("--hide-crash-restore-bubble") proxy = os.getenv("BROWSER_PROXY") if proxy: co.set_proxy(proxy) co.auto_port() if user_agent: co.set_user_agent(user_agent) co.headless( os.getenv("BROWSER_HEADLESS", "True").lower() == "true" ) # 生产环境使用无头模式 # Mac 系统特殊处理 if sys.platform == "darwin": co.set_argument("--no-sandbox") co.set_argument("--disable-gpu") return co
0
def collect_all_content(self): """ Collects all content from the current node and its descendants. Returns: Set[int]: A set containing all content from the current node and its descendants. """ all_content = set(self.content) for child in self.children: all_content.update(child.collect_all_content()) return all_content
0
def test_swap_volume(self): self._test_compute_api('swap_volume', 'cast', instance=self.fake_instance_obj, old_volume_id='oldid', new_volume_id='newid', new_attachment_id=uuids.attachment_id, version='5.0')
1
def register_post(): return {}
1
def value(self, t): """Generates the value given a timestep (based on schedule's logic). Args: t (int): The time step. This could be a tf.Tensor. Returns: any: The calculated value depending on the schedule and `t`. """ if self.framework in ["tf2", "tf", "tfe"]: return self._tf_value_op(t) return self._value(t)
1
def save_keys_to_config(cfg_key, value): value = value.replace(" ", "") if value: config.app[cfg_key] = value.split(",")
0
def write_to_file_safe(file_name, data): # 获取锁 with lock: with open(file_name, 'a') as f: f.write(data + '\n')
0
def __init__(self, num_capsules, num_route_nodes, in_channels, out_channels, kernel_size=None, stride=None, num_iterations=p.NUM_ROUTING_ITERATIONS, use_cuda=False): super(CapsuleLayer, self).__init__() self.num_route_nodes = num_route_nodes self.num_iterations = num_iterations self.num_capsules = num_capsules if num_route_nodes != -1: self.route_weights = nn.Parameter(torch.randn(num_capsules, num_route_nodes, in_channels, out_channels)) else: self.capsules = nn.ModuleList( [nn.Conv2d(in_channels, out_channels, kernel_size=kernel_size, stride=stride, padding=0) for _ in range(num_capsules)]) self.use_cuda = use_cuda
1
def release_editing_lock(self, tid): """ Release the editing lock on a task. The caller is trusted to have the lock and no verification is made. """ c = self.connection.cursor() c.execute(""" UPDATE Task SET editing = 0 WHERE id = ? """, (tid,)) self.connection.commit()
1
def mock_playwright(): with patch("playwright.async_api.async_playwright") as mock: mock_pw = MockPlaywright() mock_browser = MockBrowser() mock_context = MockContext() mock_page = MockPage() mock_pw.chromium.launch.return_value = mock_browser mock_pw.firefox.launch.return_value = mock_browser mock_browser.new_context.return_value = mock_context mock_context.new_page.return_value = mock_page mock.return_value.__aenter__.return_value = mock_pw yield mock_pw, mock_browser, mock_context, mock_page
0
def render_token(t: bytes) -> str: # pretty print a token, escaping control characters s = t.decode('utf-8', errors='replace') s = replace_control_characters(s) return s
0
def _RefIdGrad(_, grad): return grad
1
def _repr_(self): r""" EXAMPLES :: sage: NilCoxeterAlgebra(WeylGroup(['A',3,1])) # indirect doctest The Nil-Coxeter Algebra of Type A3~ over Rational Field """ return "The Nil-Coxeter Algebra of Type %s over %s"%(self._cartan_type._repr_(compact=True), self.base_ring())
1
def end_process(): stream.input_queue.push('end')
0
def __init__(self, *args, **kwargs): if kwargs.get('empty_permitted', True): kwargs['use_required_attribute'] = False super(PricingForm, self).__init__(*args, **kwargs) # Setup initial values for billing_cycle and billing_dt_select # in order to have empty values for extra forms. if self.instance.pk: self.fields['billing_dt_select'].initial = [self.instance.num_days, self.instance.due_sore] self.fields['billing_cycle'].initial = [self.instance.billing_frequency, self.instance.billing_period] else: self.fields['billing_dt_select'].initial = [0, u'start'] self.fields['billing_cycle'].initial = [1, u'month'] # Add class for recurring payment fields recurring_payment_fields = [ 'taxable', 'tax_rate', 'billing_cycle', 'billing_dt_select', 'has_trial_period', 'trial_period_days' ] for field in recurring_payment_fields: class_attr = self.fields[field].widget.attrs.get('class', None) if class_attr and 'recurring-payment' not in class_attr: class_attr += ' recurring-payment' self.fields[field].widget.attrs.update({'class': class_attr})
1
def test_with_ca(self, tmpdir): ca = certs.CertStore.from_store(str(tmpdir), "test", 2048) r = certs.dummy_cert( ca.default_privatekey, ca.default_ca, b"foo.com", [b"one.com", b"two.com", b"*.three.com", b"127.0.0.1"], b"Foo Ltd." ) assert r.cn == b"foo.com" assert r.altnames == [b'one.com', b'two.com', b'*.three.com'] assert r.organization == b"Foo Ltd." r = certs.dummy_cert( ca.default_privatekey, ca.default_ca, None, [], None ) assert r.cn is None assert r.organization is None assert r.altnames == []
1
def pytest_recording_configure(config: Any, vcr: VCR): from . import json_body_serializer vcr.register_serializer('yaml', json_body_serializer) def method_matcher(r1: vcr_request.Request, r2: vcr_request.Request) -> None: if r1.method.upper() != r2.method.upper(): raise AssertionError(f'{r1.method} != {r2.method}') vcr.register_matcher('method', method_matcher)
0
def __init__(self, family, address): BaseTestHandler.__init__(self) self.create_socket(family) self.connect(address)
1
def get_VerificationStatus(self): return self.get_query_params().get('VerificationStatus')
1
def get_asr_converter(): """Create a DocumentConverter configured for ASR with whisper_turbo model.""" pipeline_options = AsrPipelineOptions() pipeline_options.asr_options = asr_model_specs.WHISPER_TINY converter = DocumentConverter( format_options={ InputFormat.AUDIO: AudioFormatOption( pipeline_cls=AsrPipeline, pipeline_options=pipeline_options, ) } ) return converter
0
def public(request: Request): root_url = gr.route_utils.get_root_url(request, "/", None) return RedirectResponse(url=f"{root_url}/app/")
0
def test_best_model2_alignment(self): # arrange sentence_pair = AlignedSent( TestIBMModel.__TEST_TRG_SENTENCE, TestIBMModel.__TEST_SRC_SENTENCE ) # None and 'bien' have zero fertility translation_table = { 'i': {"j'": 0.9, 'aime': 0.05, 'bien': 0.02, 'jambon': 0.03, None: 0}, 'love': {"j'": 0.05, 'aime': 0.9, 'bien': 0.01, 'jambon': 0.01, None: 0.03}, 'ham': {"j'": 0, 'aime': 0.01, 'bien': 0, 'jambon': 0.99, None: 0}, } alignment_table = defaultdict( lambda: defaultdict(lambda: defaultdict(lambda: defaultdict(lambda: 0.2))) ) ibm_model = IBMModel([]) ibm_model.translation_table = translation_table ibm_model.alignment_table = alignment_table # act a_info = ibm_model.best_model2_alignment(sentence_pair) # assert self.assertEqual(a_info.alignment[1:], (1, 2, 4)) # 0th element unused self.assertEqual(a_info.cepts, [[], [1], [2], [], [3]])
1
def run_test(self): self.log.info('prepare some coins for multiple *rawtransaction commands') self.nodes[2].generate(1) self.sync_all() self.nodes[0].generate(101) self.sync_all() self.nodes[0].sendtoaddress(self.nodes[2].getnewaddress(),1.5) self.nodes[0].sendtoaddress(self.nodes[2].getnewaddress(),1.0) self.nodes[0].sendtoaddress(self.nodes[2].getnewaddress(),5.0) self.sync_all() self.nodes[0].generate(5) self.sync_all() self.log.info('Test getrawtransaction on genesis block coinbase returns an error') block = self.nodes[0].getblock(self.nodes[0].getblockhash(0)) assert_raises_rpc_error(-5, "The genesis block coinbase is not considered an ordinary transaction", self.nodes[0].getrawtransaction, block['merkleroot']) self.log.info('Check parameter types and required parameters of createrawtransaction') # Test `createrawtransaction` required parameters assert_raises_rpc_error(-1, "createrawtransaction", self.nodes[0].createrawtransaction) assert_raises_rpc_error(-1, "createrawtransaction", self.nodes[0].createrawtransaction, []) # Test `createrawtransaction` invalid extra parameters assert_raises_rpc_error(-1, "createrawtransaction", self.nodes[0].createrawtransaction, [], {}, 0, False, 'foo') # Test `createrawtransaction` invalid `inputs` txid = '1d1d4e24ed99057e84c3f80fd8fbec79ed9e1acee37da269356ecea000000000' assert_raises_rpc_error(-3, "Expected type array", self.nodes[0].createrawtransaction, 'foo', {}) assert_raises_rpc_error(-1, "JSON value is not an object as expected", self.nodes[0].createrawtransaction, ['foo'], {}) assert_raises_rpc_error(-1, "JSON value is not a string as expected", self.nodes[0].createrawtransaction, [{}], {}) assert_raises_rpc_error(-8, "txid must be of length 64 (not 3, for 'foo')", self.nodes[0].createrawtransaction, [{'txid': 'foo'}], {}) assert_raises_rpc_error(-8, "txid must be hexadecimal string (not 'ZZZ7bb8b1697ea987f3b223ba7819250cae33efacb068d23dc24859824a77844')", self.nodes[0].createrawtransaction, [{'txid': 'ZZZ7bb8b1697ea987f3b223ba7819250cae33efacb068d23dc24859824a77844'}], {}) assert_raises_rpc_error(-8, "Invalid parameter, missing vout key", self.nodes[0].createrawtransaction, [{'txid': txid}], {}) assert_raises_rpc_error(-8, "Invalid parameter, missing vout key", self.nodes[0].createrawtransaction, [{'txid': txid, 'vout': 'foo'}], {}) assert_raises_rpc_error(-8, "Invalid parameter, vout must be positive", self.nodes[0].createrawtransaction, [{'txid': txid, 'vout': -1}], {}) assert_raises_rpc_error(-8, "Invalid parameter, sequence number is out of range", self.nodes[0].createrawtransaction, [{'txid': txid, 'vout': 0, 'sequence': -1}], {}) # Test `createrawtransaction` invalid `outputs` address = self.nodes[0].getnewaddress() address2 = self.nodes[0].getnewaddress() assert_raises_rpc_error(-1, "JSON value is not an array as expected", self.nodes[0].createrawtransaction, [], 'foo') self.nodes[0].createrawtransaction(inputs=[], outputs={}) # Should not throw for backwards compatibility self.nodes[0].createrawtransaction(inputs=[], outputs=[]) assert_raises_rpc_error(-8, "Data must be hexadecimal string", self.nodes[0].createrawtransaction, [], {'data': 'foo'}) assert_raises_rpc_error(-5, "Invalid Feathercoin address", self.nodes[0].createrawtransaction, [], {'foo': 0}) assert_raises_rpc_error(-3, "Invalid amount", self.nodes[0].createrawtransaction, [], {address: 'foo'}) assert_raises_rpc_error(-3, "Amount out of range", self.nodes[0].createrawtransaction, [], {address: -1}) assert_raises_rpc_error(-8, "Invalid parameter, duplicated address: %s" % 
Preview rows (truncated): each row pairs a Python function, flattened to a single line of text, with a binary label (1 for member functions drawn from the Pile, 0 for non-member functions taken from post-2024 GitHub repositories).

Dataset

Overview

The Python Function Benchmark serves as a real-world evaluation dataset for membership inference attacks on code LLMs, specifically targeting models pretrained on datasets like the Pile (e.g., Pythia, GPT-Neo, StableLM).

The dataset contains training (member) data and non-training (non-member) data:

  • Member data includes 1,000 Python functions sampled from the Pile dataset (released in 2021). To ensure a diverse sample, we systematically selected the first 10 functions from every 100 consecutive entries in the Pile, resulting in a total of 1,000 member functions.

  • Non-member data includes 1,000 Python functions extracted from 100 GitHub repositories created after January 1, 2024 (all four evaluated LLMs had been released prior to this date). To ensure repository quality, we sorted repositories by star count in descending order and extracted 10 Python functions from each repository in order. To verify that these functions were genuinely original and not cloned from pre-existing sources, we implemented a rigorous verification process: we parsed each candidate function's code using Python's ast module to extract its name, variable names, and function calls, then used these elements to build search queries for the GitHub API. The verification employed three heuristics: (1) searching for the exact function name to identify direct duplicates; (2) searching by internal variable names to detect refactored code reuse; and (3) searching for the complete string of function calls to find logic similarities. Two authors conducted peer reviews on the search results to ensure all 1,000 functions were original and created after January 2024.
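
For illustration, the AST-based feature extraction and query construction described above could look like the sketch below. The helper names, the query qualifiers, and the field-selection details are assumptions made for this example; the actual GitHub API calls, authentication, and rate-limit handling are omitted.

import ast

def extract_features(source: str):
    """Extract the identifiers used by the three duplicate-search heuristics."""
    tree = ast.parse(source)
    func = next(node for node in ast.walk(tree) if isinstance(node, ast.FunctionDef))
    variables, calls = set(), []
    for node in ast.walk(func):
        if isinstance(node, ast.Name):
            variables.add(node.id)
        elif isinstance(node, ast.Call):
            # Record the called name, whether it is a bare name or an attribute access.
            target = node.func
            if isinstance(target, ast.Name):
                calls.append(target.id)
            elif isinstance(target, ast.Attribute):
                calls.append(target.attr)
    return func.name, sorted(variables), calls

def build_queries(name, variables, calls):
    """Assemble GitHub code-search query strings for heuristics (1)-(3) (hypothetical helper)."""
    return [
        f'"def {name}(" language:Python',                  # (1) exact function name
        " ".join(variables) + " language:Python",          # (2) internal variable names
        " ".join(calls) + " language:Python",              # (3) full string of function calls
    ]

The resulting query strings would then be submitted to the GitHub code-search API, and the hits reviewed manually as described above.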

The benchmark includes 214 non-member function files (some repositories contributed multiple files) with an average of 25.34 lines of code (LOC). For member functions, file counts are unavailable as this information was not provided in the Pile dataset.

The benchmark supports evaluation under varied member-to-non-member ratios (e.g., 1:1, 1:5, 5:1) and includes statistics on syntax conventions (e.g., 38.4% of tokens are syntax-related across categories like data models and expressions).

If you find this work helpful, please consider citing our paper:

@misc{li2025synprune,
    title={Uncovering Pretraining Code in LLMs: A Syntax-Aware Attribution Approach},
    author={Yuanheng Li and Zhuoyang Chen and Xiaoyun Liu and Yuhao Wang and Mingwei Liu and Yang Shi and Kaifeng Huang and Shengjie Zhao},
    year={2025},
    eprint={2511.07033},
    archivePrefix={arXiv},
    primaryClass={cs.CR}
}

divide.py

divide.py is a script designed to split a JSONL file into two separate files based on the approximate token count of a specified text field. It detects the appropriate text field from the input JSONL and uses the median token count as a threshold to categorize the entries into "short" and "long".

Usage

To use divide.py, run the following command in your terminal:

python divide.py --input <input_jsonl_path> --short_out <output_short_jsonl_path> --long_out <output_long_jsonl_path>
  • --input: Path to the input JSONL file (required).
  • --short_out: Path to the output JSONL file for short entries (default: short.jsonl).
  • --long_out: Path to the output JSONL file for long entries (default: long.jsonl).
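
For reference, a minimal sketch of the splitting logic, assuming whitespace-separated tokens as the approximate token count and the first string-valued key as the detected text field (the released script may use a different tokenizer and detection rule):

import argparse
import json
import statistics

def main():
    parser = argparse.ArgumentParser()
    parser.add_argument("--input", required=True)
    parser.add_argument("--short_out", default="short.jsonl")
    parser.add_argument("--long_out", default="long.jsonl")
    args = parser.parse_args()

    with open(args.input, encoding="utf-8") as f:
        rows = [json.loads(line) for line in f if line.strip()]

    # Assumed field detection: take the first string-valued key (e.g. "function").
    text_field = next(k for k, v in rows[0].items() if isinstance(v, str))
    # Assumed token count: whitespace-separated tokens as a cheap approximation.
    counts = [len(row[text_field].split()) for row in rows]
    threshold = statistics.median(counts)

    with open(args.short_out, "w", encoding="utf-8") as short_f, \
         open(args.long_out, "w", encoding="utf-8") as long_f:
        for row, n in zip(rows, counts):
            target = short_f if n <= threshold else long_f
            target.write(json.dumps(row) + "\n")

if __name__ == "__main__":
    main()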

ratio.py

ratio.py is a script that creates datasets with specified positive and negative sample ratios from two JSONL files containing positive and negative samples. It randomly samples from the provided files to build a new dataset for each defined configuration.

Usage

To use ratio.py, simply run the script:

python ratio.py

This script will read from positive/positive.jsonl and negative/negative.jsonl, and create datasets based on the configurations defined in the script. The output files will be named dataset_{name}.jsonl for each configuration.

Dataset Configurations

The following configurations are available in the script:

  • 1_1: 2000 total samples with a 1:1 positive to negative ratio.
  • 1_5: 1200 total samples with a 1:5 positive to negative ratio.
  • 5_1: 1200 total samples with a 5:1 positive to negative ratio.
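
A minimal sketch of the sampling logic behind these configurations; the random seed, shuffling, and integer rounding shown here are assumptions, not necessarily what the released script does:

import json
import random

# Configurations from the card: name -> (total samples, positive parts, negative parts)
CONFIGS = {
    "1_1": (2000, 1, 1),
    "1_5": (1200, 1, 5),
    "5_1": (1200, 5, 1),
}

def load_jsonl(path):
    with open(path, encoding="utf-8") as f:
        return [json.loads(line) for line in f if line.strip()]

def main():
    random.seed(0)  # assumed seed for reproducibility
    positives = load_jsonl("positive/positive.jsonl")
    negatives = load_jsonl("negative/negative.jsonl")
    for name, (total, pos_parts, neg_parts) in CONFIGS.items():
        n_pos = total * pos_parts // (pos_parts + neg_parts)
        n_neg = total - n_pos
        sample = random.sample(positives, n_pos) + random.sample(negatives, n_neg)
        random.shuffle(sample)
        with open(f"dataset_{name}.jsonl", "w", encoding="utf-8") as out:
            for row in sample:
                out.write(json.dumps(row) + "\n")

if __name__ == "__main__":
    main()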

extract_members.py

extract_members.py is a script that extracts members and non-members from a JSONL file based on the label field. It reads from python_sample.jsonl, where a label of 1 indicates a member and a label of 0 indicates a non-member. The script outputs two separate JSONL files: one for members and one for non-members.

Usage

To use extract_members.py, run the following command in your terminal:

python extract_members.py

This script will read from dataset/python_sample.jsonl and create the following output files:

  • dataset/member.jsonl: Contains all entries with label equal to 1.
  • dataset/non-member.jsonl: Contains all entries with label equal to 0.

Output

After running the script, you will see a message indicating the number of extracted members and non-members.
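
A minimal sketch of the extraction step and the final count message, assuming each JSONL row carries an integer label field as described above:

import json

def main():
    members, non_members = [], []
    with open("dataset/python_sample.jsonl", encoding="utf-8") as f:
        for line in f:
            if not line.strip():
                continue
            row = json.loads(line)
            # label 1 -> member, label 0 -> non-member
            (members if row["label"] == 1 else non_members).append(row)

    for path, rows in [("dataset/member.jsonl", members),
                       ("dataset/non-member.jsonl", non_members)]:
        with open(path, "w", encoding="utf-8") as out:
            for row in rows:
                out.write(json.dumps(row) + "\n")

    print(f"Extracted {len(members)} members and {len(non_members)} non-members.")

if __name__ == "__main__":
    main()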
