
Dead or Alive 6 - Modding Thread and Discussion


Recommended Posts

7 hours ago, Bowjobs said:

--sorry, I mean models. I'm using Blender 2.79

Sorry, but you can't open the parts of the costumes that have physics in Blender (with some exceptions, like Helena's C123 and a few others). If you want them for modding: sorry, you can't mod physics. If you want them for anything else, try Noesis; it lets you fully import the model and export it as FBX.


It's been over a month with no mods; this page is almost dead, like DOA6 itself. Very sad. Now people have started making DOA porn. Only one modder, anry huchi001, is still making mods, but sadly he sells them and I don't buy mods. I hope we get some modders who release mods for free. When I visit this page I see nothing posted for months. Only mods make DOA6 special.

2 hours ago, DOA RAJU said:

It's been over a month with no mods; this page is almost dead, like DOA6 itself. Very sad. Now people have started making DOA porn. Only one modder, anry huchi001, is still making mods, but sadly he sells them and I don't buy mods. I hope we get some modders who release mods for free. When I visit this page I see nothing posted for months. Only mods make DOA6 special.

Since 16/12 (a month ago), 21 mods have been published in the DeviantArt mod folder, and most of them are packs that include more than one mod :D

Go check them! 

https://www.deviantart.com/streetmodders/gallery/69037284/dead-or-alive-6-mods?

On 1/15/2021 at 10:17 PM, Alexei81 said:

The four videos you shared are a short-video compilation. Clicking them leads to what looks like a typical download page, but I can't tell which button is the download button; the page is in Russian, and even after translating it I still can't find one. I only managed to download by using unconventional browser parsing tricks. I hope you can explain how your work is meant to be downloaded.

On 10/8/2019 at 1:46 AM, h4sjohnson said:

IMPROVED SOFT .PY SCRIPT

 

Hey gentlemen, I'm bringing you a modified version of ausgeek's decode_doa6_soft.py. I fixed some mathematical mistakes; this script should help you apply your modifications to soft-body meshes without fixing the soft body manually.

When ausgeek's code finds vertices not surrounded by 8 soft particles, most of the time it just throws errors. And even when everything is fine, a mathematical mistake can still mess up the soft-body skinning.

Instead, in this modified version, when a vertex is not surrounded by 8 soft particles, it calculates an "approximate" skinning result. It worked very well in my tests. I also made it select the vertices not surrounded by 8 soft particles, hoping that helps perfectionist modders fix any remaining flaws.
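The approximation boils down to projecting the vertex onto segments between node pairs and blending each pair's weights by that ratio (the script applies the same idea linearly, bilinearly, and trilinearly depending on how many surrounding nodes it found). A minimal standalone sketch of the one-dimensional case, using made-up node positions and weights rather than data from any real mesh:

```python
import numpy as np

def ratio_along_line(pos, a, b):
    """Project pos onto the line a->b: 0.0 at a, 1.0 at b.

    pos does not need to lie on the line; the closest point on it is used.
    """
    ab = b - a
    denom = np.dot(ab, ab)
    if denom < 1e-3:  # degenerate segment: treat it as a single point
        return 0.0
    return np.dot(pos - a, ab) / denom

def lerp(w1, w2, r):
    """Linearly blend two weight vectors by ratio r."""
    return (1.0 - r) * w1 + r * w2

# A vertex lying between two soft-body nodes gets its weights blended by
# how far along the segment between them its projection lands:
a, b = np.array([0.0, 0.0, 0.0]), np.array([2.0, 0.0, 0.0])
w_a, w_b = np.array([1.0, 0.0]), np.array([0.0, 1.0])
pos = np.array([0.5, 0.3, 0.0])  # slightly off the line is fine

r = ratio_along_line(pos, a, b)      # 0.25
weights = lerp(w_a, w_b, r)          # [0.75, 0.25]
```

The bilinear and trilinear cases in the script just repeat this blend across opposite sides of a square, then a cube, of nodes.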

If you have any problems or spot any mistakes, please share them with me.

Thanks to @ausgeek.

decode_doa6_soft.py 35.02 kB · 459 downloads

 

For a preview, here's a simple mod. It uses SaafRat's chest and body to replace the original meshes, without fixing a single vertex manually. (It's the no-panties version!)

HON_COS_003.g1m 2.3 MB · 225 downloads

 

 

Thank you for these script fixes, @h4sjohnson, and thank you for the original scripts, @ausgeek.

 

I made some additional edits to decode_doa6_soft.py to get it working on Blender 2.91. I borrowed some code from the Blender 2.80 migration of blender_3dmigoto.py: https://github.com/DarkStarSword/3d-fixes/commits/master/blender_3dmigoto.py
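For anyone porting other 2.79-era add-ons, the central breaking change is that bpy.props definitions must become class annotations in Blender 2.8+. The make_annotations helper in the script handles that conversion at registration time; here is a standalone illustration of what it does, with a plain tuple standing in for a bpy.props property since bpy is not available outside Blender:

```python
def make_annotations(cls):
    """Move tuple-valued class fields into __annotations__, as Blender 2.8+
    expects for bpy.props definitions (2.79 read them as plain attributes)."""
    bl_props = {k: v for k, v in cls.__dict__.items() if isinstance(v, tuple)}
    if bl_props:
        if '__annotations__' not in cls.__dict__:
            setattr(cls, '__annotations__', {})
        annotations = cls.__dict__['__annotations__']
        for k, v in bl_props.items():
            annotations[k] = v   # becomes: filter_glob: StringProperty(...)
            delattr(cls, k)      # no longer a plain class attribute
    return cls

# Outside Blender, a tuple stands in for bpy.props.StringProperty(...):
class ImportOp:
    filter_glob = ('StringProperty', {'default': '*.g1m'})

make_annotations(ImportOp)
# The property is now an annotation rather than a class attribute.
```

The real helper in the script additionally checks bpy.app.version so the same source file still registers cleanly on 2.79.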

 

I have not tested extensively, but the imported objects appear correct, meaning they look the same as those produced by the original script in Blender 2.79.

 

Apologies for the inline code. I don't seem to have permission to upload files.

 

decode_doa6_soft.py:

 

Spoiler

#!/usr/bin/env python3

bl_info = {
    "name": "DOA6 Soft Body",
    "blender": (2, 80, 0),
    "author": "Ian Munsie (darkstarsword@gmail.com)",
    "location": "File > Import-Export",
    "description": "Work with DOA6 soft body meshes",
    "category": "Import-Export",
    "tracker_url": "https://github.com/DarkStarSword/3d-fixes/issues",
}

import os, struct, sys, numpy, io, copy, itertools, math, collections, json, argparse, glob

try:
    import bpy
except ImportError as e:
    print('Running standalone - decoding only, no Blender integration')
else:
    import bpy_extras

    if bpy.app.version >= (2, 80):
        import_menu = bpy.types.TOPBAR_MT_file_import
    else:
        import_menu = bpy.types.INFO_MT_file_import

# https://theduckcow.com/2019/update-addons-both-blender-28-and-27-support/
def make_annotations(cls):
    """Converts class fields to annotations if running with Blender 2.8"""
    if bpy.app.version < (2, 80):
        return cls
    bl_props = {k: v for k, v in cls.__dict__.items() if isinstance(v, tuple)}
    if bl_props:
        if '__annotations__' not in cls.__dict__:
            setattr(cls, '__annotations__', {})
        annotations = cls.__dict__['__annotations__']
        for k, v in bl_props.items():
            annotations[k] = v
            delattr(cls, k)
    return cls

def select_set(object, state):
    """Multi version compatibility for setting object selection"""
    if hasattr(object, "select_set"):
        object.select_set(state)
    else:
        object.select = state

def get_active_object(context):
    """Get the active object in a 2.7 and 2.8 compatible way"""
    if hasattr(context, "view_layer"):
        return context.view_layer.objects.active
    else:
        return context.scene.objects.active

def link_object_to_scene(context, obj):
    if hasattr(context.scene, "collection"): # Blender 2.80
        context.scene.collection.objects.link(obj)
    else: # Blender 2.79
        context.scene.objects.link(obj)

############## End Blender 2.7 / 2.8 compatibility wrappers ##############

class Fatal(Exception): pass

numpy.set_printoptions(suppress = True,
        formatter = {
            #'int': lambda x : '%08x' % x,
            'float': lambda x: '%.2f' % x
        },
        edgeitems = numpy.inf)

region_unk_header = numpy.dtype([
    ('u10', numpy.float32, 8),
    ('u11', numpy.float32, 3),
    ('u12', numpy.float32, 3),
    ('u13', numpy.uint32, 1),
    ('u14', numpy.float32, 1),
    ('u15', numpy.float32, 2),
    ('u16', numpy.float32, 6),
])

node_fmt = numpy.dtype([
    ('id', numpy.uint32, 1),
    ('pos', numpy.float32, 3),
    ('rot', numpy.float32, 3), # Maybe
    ('0x43', numpy.uint32, 1),
    ('b', numpy.uint8, 4),
    ('links', numpy.uint32, 1),
])

verbosity = 0
def pr_verbose(*args, **kwargs):
    if verbosity >= 1:
        print(*args, **kwargs)

def decode_node(f, region_obj):
    buf = f.read(10*4)
    node, = numpy.frombuffer(buf, node_fmt)
    pr_verbose('Node', node['id'], node)
    #assert(node['0x43'] == 0x43)

    # Other nodes this one influences and/or is influenced by:
    for i in range(node['links'] + 1):
        data = struct.unpack('<If', f.read(2*4))
        pr_verbose('  Link %i: %.2f' % data)

    data = struct.unpack('<3f3I', f.read(6*4))
    #assert(data == (0,)*6)
    assert(data[3:] == (0,)*3)
    if data != (0,)*6:
        pr_verbose(' ', data[:3])

    if region_obj:
        # Could use other representations or even soft body within Blender, but
        # since we are only after the node positions let's keep it simple and
        # represent each soft body node with a cube:
        if bpy.app.version >= (2, 80):
            bpy.ops.mesh.primitive_cube_add(size=0.5, location=node['pos'], rotation=node['rot'])
        else:
            bpy.ops.mesh.primitive_cube_add(radius=0.25, location=node['pos'], rotation=node['rot'])

        active_object = get_active_object(bpy.context)
        active_object.name = '%s[%u]' % (region_obj.name, node['id'])
        active_object.lock_location = (True, True, True)
        active_object.lock_rotation = (True, True, True)
        active_object.parent = region_obj

def decode_soft_node_region(f):
    header = struct.unpack('<13I', f.read(13*4))
    (id, len1, z2, z3, u4, len2, len3, root_bone_idx, u6, u7, z8, o9, len4) = header
    pr_verbose('Soft region header', header)

    # Assertions to catch any variants we haven't seen before:
    #assert(header == (0, 217, 0, 0, 9, 217, 58, 100, 1, 3, 0, 1, 217)) # len: 21836, pt2 len: 401*4 (217 + 9 + 1 + 3*58 ?), pt3 len: 494*4
    #assert(header == (1, 217, 0, 0, 9, 217, 58, 101, 1, 3, 0, 1, 217)) # len: 21836, pt2 len: 401*4 (217 + 9 + 1 + 3*58 ?), pt3 len: 494*4
    #assert(header == (4, 104, 0, 0, 9, 104, 58, 102, 0, 1, 0, 1, 104)) # len: 12900, pt2 len: 287*4 (104 + 9 + 0 + 3*58 ?), pt3 len: 268*4
    #assert(header == (5, 104, 0, 0, 9, 104, 70, 103, 0, 1, 0, 1, 104)) # len: 13044, pt2 len: 323*4 (104 + 9 + 0 + 3*70 ?), pt3 len: 268*4
    assert(z2 == 0)
    assert(z3 == 0)
    assert(u6 in (0, 1))
    assert(u7 in (1, 3))
    assert(z8 == 0)
    assert(o9 == 1)
    assert(len1 == len2 == len4)

    # Probably just part of the same header. Might define the bounding box and
    # so on, though doesn't quite look right compared to what I believe are the
    # node positions, so unsure. Splitting this from the above read mostly to
    # use the numpy formatter.
    buf = f.read(24*4)
    unknown, = numpy.frombuffer(buf, region_unk_header)
    pr_verbose('Soft region unknown', unknown)

    region_obj = None
    if 'bpy' in globals():
        name = '%s.SOFT[%u]' % (os.path.basename(f.name), id)
        region_obj = bpy.data.objects.new(name, None)
        axis_forward = '-Z'; axis_up = 'Y'; # FIXME: use orientation_helper_factory
        conversion_matrix = bpy_extras.io_utils.axis_conversion(from_forward=axis_forward, from_up=axis_up).to_4x4()
        region_obj.matrix_world = conversion_matrix
        link_object_to_scene(bpy.context, region_obj)

    for i in range(len1):
        decode_node(f, region_obj)

    # Next follows several lists of node IDs. The length of each list seems to
    # be from various fields in the header, but the contents of the individual
    # lists doesn't matter so much to us so I haven't confirmed that the lists
    # are actually in this order so they might be mixed up (but looks right):
    pr_verbose(numpy.frombuffer(f.read(u4   * 4), numpy.uint32))
    pr_verbose(numpy.frombuffer(f.read(len1 * 4), numpy.uint32))
    pr_verbose(numpy.frombuffer(f.read(u6   * 4), numpy.uint32))
    pr_verbose(numpy.frombuffer(f.read(len3 * 4 * 3), numpy.dtype([('node', numpy.uint32, 3)])))

    # Next follows a list of floats. Conveniently it gives us the section
    # length in bytes that we can skip over:
    (_6, len5) = struct.unpack('<2I', f.read(8))
    pr_verbose(_6)
    #assert(_6 == 6) # KOK_COS_004.g1m has 7
    pr_verbose(numpy.frombuffer(f.read(len5 - 8), numpy.float32))

def decode_soft_node_regions(f):
    num_regions, = struct.unpack('<I', f.read(4))
    for i in range(num_regions):
        decode_soft_node_region(f)
        pr_verbose()

    assert(not f.read())

def print_unknown(name, buf):
    orig_opts = numpy.get_printoptions()
    opts = copy.deepcopy(orig_opts)
    opts['formatter']['int'] = lambda x : '%08x' % x
    numpy.set_printoptions(**opts)

    pr_verbose(name)
    pr_verbose(numpy.frombuffer(buf, numpy.uint32))

    numpy.set_printoptions(**orig_opts)

def dump_unknown_section(f, *args):
    print_unknown('Unknown section:', f.read())

decode_soft_section = {
    0x80001: decode_soft_node_regions,
    0x80002: dump_unknown_section,
}

def io_range(f, len):
    b = io.BytesIO(f.read(len))
    b.name = f.name
    return b

class G1MChunk(object):
    def __init__(self, f, version, g1m):
        self.orig_val = f.getvalue()
        self.version = version
        self.g1m = g1m

    def getvalue(self):
        return self.orig_val

class DumpUnknownG1MChunk(G1MChunk):
    def __init__(self, f, version, g1m):
        G1MChunk.__init__(self, f, version, g1m)
        print_unknown('Unknown section:', f.read())

class SOFTChunk(G1MChunk):
    def __init__(self, f, version, g1m):
        assert(version == b'5100')
        G1MChunk.__init__(self, f, version, g1m)
        num_sections, = struct.unpack('<I', f.read(4))
        for i in range(num_sections):
            section_type, section_len = struct.unpack('<2I', f.read(8))
            decode_soft_section[section_type](io_range(f, section_len - 8))

        assert(not f.read())

class G1MGSection(object):
    def __init__(self, f, g1m, g1mg):
        self.orig_val = f.getvalue()
        self.g1mg = g1mg
        self.g1m = g1m

    def getvalue(self):
        return self.orig_val

class G1MGBoneMap(G1MGSection):
    def __init__(self, f, g1m, g1mg):
        G1MGSection.__init__(self, f, g1m, g1mg)
        num_maps, = struct.unpack('<I', f.read(4))
        pr_verbose('Num bone maps:', num_maps)
        dtype = numpy.dtype([
            ('id', numpy.uint32, 1), # I think this is a unique ID for the bone ->
                                     # vg mapping. Each unique mapping in the file
                                     # gets an index starting at 0 and
                                     # incrementing by 1 each time. If a mapping
                                     # is repeated from an earlier sub-mesh it
                                     # will have the same ID.
            ('zero', numpy.uint32, 1),
            ('bone', numpy.uint32, 1),
        ])
        g1mg.bone_maps = collections.OrderedDict()
        g1m.import_oid()
        for i in range(num_maps):
            num_maps, = struct.unpack('<I', f.read(4))
            data = numpy.frombuffer(f.read(num_maps * 4 * 3), dtype)
            pr_verbose('Map %i, len %i:' % (i, len(data)))
            vgmap = collections.OrderedDict()
            for vg,d in enumerate(data):
                try:
                    bone_name = g1m.oid_map[list(g1m.chunks[b'G1MS'].indices).index(d['bone'])]
                except:
                    bone_name = 'UnnamedBone#%d' % d['bone']
                pr_verbose(
                        '  VG:', vg*3,
                        'BoneID:', d['bone'], repr(bone_name),
                        'MapID:', d['id'],
                        'Unknown:', d['zero'],
                )
                vgmap[bone_name] = vg*3
            g1mg.bone_maps[i] = vgmap
            pr_verbose()

        assert(not f.read())

    def getvalue(self):
        f = io.BytesIO()
        f.write(struct.pack('<I', len(self.g1mg.bone_maps)))
        self.g1m.import_oid()
        reverse_oid_map = self.g1m.oid_map.reverse()
        bone_map_ids = {}
        for vgmap in self.g1mg.bone_maps.values():
            f.write(struct.pack('<I', len(vgmap)))
            for i, (bone_name, vg) in enumerate(vgmap.items()):
                assert(i*3 == vg)
                if bone_name.startswith('UnnamedBone#'):
                    bone_id = int(bone_name.partition('#')[2])
                else:
                    bone_id = self.g1m.chunks[b'G1MS'].indices[reverse_oid_map[bone_name]]
                map_id = bone_map_ids.setdefault(bone_id, len(bone_map_ids))
                f.write(struct.pack('<3I', map_id, 0, bone_id))

        return f.getvalue()

class G1MGSurfaceMap(G1MGSection):
    def __init__(self, f, g1m, g1mg):
        G1MGSection.__init__(self, f, g1m, g1mg)

        SurfaceMap = numpy.dtype([
            ('u0', numpy.uint32, 2),
            ('bone_map', numpy.uint32, 1),
            ('u1', numpy.uint32, 11),
        ])

        num_maps, = struct.unpack('<I', f.read(4))

        g1mg.surface_maps = numpy.frombuffer(f.read(SurfaceMap.itemsize * num_maps), SurfaceMap)
        pr_verbose('Surfaces:\n', g1mg.surface_maps)
        pr_verbose()

        assert(not f.read())

    def getvalue(self):
        surface_maps = self.g1mg.surface_maps
        return struct.pack('<I', len(surface_maps)) + surface_maps.tobytes()

class OIDMap(dict):
    def __init__(self, f):
        for l in f:
            if l.startswith(';'):
                continue
            id, _, name = l.rstrip().partition(',')
            if id and name:
                self[int(id)] = name

    def reverse(self):
        return dict(map(reversed, self.items()))

def align(file, alignment):
    off = file.tell()
    mod = off % alignment
    if mod == 0:
        return
    file.seek(alignment - mod, 1)

class G1MFChunk(G1MChunk):
    dtype = numpy.dtype([
        ('u0', numpy.uint32, 13),
        ('num_bone_maps', numpy.uint32, 1),
        ('num_individual_bone_maps', numpy.uint32, 1),
        ('u1', numpy.uint32, 58),
    ])

    def __init__(self, f, version, g1m):
        assert(version == b'9200')
        G1MChunk.__init__(self, f, version, g1m)
        self.data, = numpy.frombuffer(f.read(self.dtype.itemsize), self.dtype)
        pr_verbose(self.data)

        assert(not f.read())

    def getvalue(self):
        f = io.BytesIO()
        f.write(self.data['u0'].tobytes())
        f.write(struct.pack('<I', len(self.g1m.chunks[b'G1MG'].bone_maps)))
        f.write(struct.pack('<I', sum(map(len,self.g1m.chunks[b'G1MG'].bone_maps.values()))))
        f.write(self.data['u1'].tobytes())
        return f.getvalue()

class G1MSChunk(G1MChunk):
    def __init__(self, f, version, g1m):
        assert(version == b'2300')
        G1MChunk.__init__(self, f, version, g1m)
        header = struct.unpack('<2I4H', f.read(16))
        pr_verbose(header)
        (bones_offset, unk_10, num_bones, num_indices, num_parents, unk_1A) = header

        self.indices = numpy.frombuffer(f.read(num_indices * 2), numpy.int16)
        pr_verbose(self.indices)
        self.parents = numpy.frombuffer(f.read(num_parents * 2), numpy.int16)
        pr_verbose(self.parents)

        align(f, 4)
        assert(f.tell() == bones_offset-12)
        self.bones_raw = f.read()
        #print_unknown('Bones:', self.bones_raw)

class G1MGChunk(G1MChunk):
    decode_g1mg_section = {
            # 0x10001: dump_unknown_section,
            0x10006: G1MGBoneMap,
            0x10008: G1MGSurfaceMap,
            # 0x10009: dump_unknown_section,
    }

    def __init__(self, f, version, g1m):
        assert(version == b'4400')
        G1MChunk.__init__(self, f, version, g1m)
        self.header = struct.unpack('<4sI6fI', f.read(36))
        pr_verbose(self.header)
        (platform, unk_10, min_x, min_y, min_z, max_x, max_y, max_z, num_sections) = self.header
        assert(platform == b'DX11')
        self.chunks = collections.OrderedDict()
        for i in range(num_sections):
            section_type, section_len = struct.unpack('<2I', f.read(8))
            pr_verbose(hex(section_type))
            buf = io_range(f, section_len - 8)
            if section_type in self.decode_g1mg_section:
                self.chunks[section_type] = \
                    self.decode_g1mg_section[section_type](buf, g1m, self)
            else:
                self.chunks[section_type] = G1MGSection(buf, g1m, self)

    def getvalue(self):
        f = io.BytesIO()
        f.write(struct.pack('<4sI6fI', *self.header))
        for section_id, chunk in self.chunks.items():
            buf = chunk.getvalue()
            f.write(struct.pack('<2I', section_id, len(buf) + 8))
            f.write(buf)
        return f.getvalue()

class G1MFile(object):
    chunk_decoders = {
        b'SOFT': SOFTChunk,
        b'G1MS': G1MSChunk,
        b'G1MG': G1MGChunk,
        b'G1MF': G1MFChunk,
        #b'G1MF': DumpUnknownG1MChunk,
    }

    G1MHeader = numpy.dtype([
        ('signature', numpy.character, 4),
        ('version', numpy.character, 4),
        ('file_size', numpy.uint32, 1),
        ('header_size', numpy.uint32, 1),
        ('u10', numpy.uint32, 1),
        ('num_chunks', numpy.uint32, 1),
    ])

    def __init__(self, f, decode_chunks):
        self.name = f.name
        self.oid_map = None
        self.header, = numpy.frombuffer(f.read(24), self.G1MHeader)
        pr_verbose(self.header)
        (eyecatcher, version, file_size, header_size, u10, chunks) = self.header
        assert(bytes(reversed(eyecatcher)) == b'G1M_')
        assert(version == b'7300')

        f.seek(header_size)
        self.chunks = collections.OrderedDict()
        for i in range(chunks):
            eyecatcher, chunk_version, chunk_size = struct.unpack('<4s4sI', f.read(12))
            eyecatcher = bytes(reversed(eyecatcher))
            pr_verbose(eyecatcher, chunk_version)
            buf = io_range(f, chunk_size - 12)
            if eyecatcher in self.chunk_decoders and (not decode_chunks or eyecatcher in decode_chunks):
                self.chunks[eyecatcher] = \
                    self.chunk_decoders[eyecatcher](buf, chunk_version, self)
            else:
                self.chunks[eyecatcher] = G1MChunk(buf, chunk_version, self)

    def write(self, f):
        f.write(self.header.tobytes())

        for eyecatcher, chunk in self.chunks.items():
            buf = chunk.getvalue()
            f.write(struct.pack('<4s4sI', bytes(reversed(eyecatcher)), chunk.version, len(buf) + 12))
            f.write(buf)

        file_size = f.tell()
        f.seek(8)
        f.write(struct.pack('<I', file_size))

    def import_oid(self):
        if self.oid_map is not None:
            return

        try:
            oidfilename,ext = os.path.splitext(self.name)
            while ext and ext.lower() != '.g1m':
                oidfilename,ext = os.path.splitext(oidfilename)
            oidf = open(oidfilename + '.oid', 'r')
        except OSError as e:
            print('Cannot open %s: %s' % (oidfilename, str(e)))
        else:
            self.oid_map = OIDMap(oidf)
            print('Loaded Object ID map')
            #pr_verbose(self.oid_map)

    def export_vgmaps(self, print=print):
        G1MG = self.chunks[b'G1MG']
        dir = os.path.splitext(self.name)[0]
        print('Exporting %i vertex group maps' % len(G1MG.surface_maps))
        for i, surface_map in enumerate(G1MG.surface_maps):
            vgmap = G1MG.bone_maps[surface_map['bone_map']]
            path = os.path.join(dir, '%d.vgmap' % i)
            try:
                json.dump(vgmap, open(path, 'w'), indent=2)
            except Exception as e:
                print('Unable to dump vertex group mapping:', str(e))
            else:
                print('Exported', path)

    def import_vgmaps(self, print=print):
        G1MG = self.chunks[b'G1MG']
        dir = os.path.splitext(self.name)[0]
        # Remove write protection:
        G1MG.surface_maps = G1MG.surface_maps.copy()
        for filename in glob.glob(os.path.join(dir, '*.vgmap')):
            basename, ext = os.path.splitext(os.path.basename(filename))
            if not basename.isdecimal():
                continue
            surface = int(basename)
            if surface >= len(G1MG.surface_maps):
                print('%s is out of range' % filename)
                continue
            vgmap = json.load(open(filename, 'r'))

            bone_map_idx = surface
            G1MG.surface_maps[surface]['bone_map'] = bone_map_idx
            G1MG.bone_maps[bone_map_idx] = vgmap
            print('Imported %s as bone map %i...' % (filename, bone_map_idx))

def parse_args():
    global verbosity

    parser = argparse.ArgumentParser(description = 'DOA6 g1m Tool')
    parser.add_argument('files', nargs='*',
            help='List of g1m files to parse')
    parser.add_argument('--export-vgmap', action='store_true',
            help='Extract vertex group maps from g1m file')
    parser.add_argument('--import-vgmap', action='store_true',
            help='Import vertex group maps to g1m file')
    parser.add_argument('--test', action='store_true',
            help='Verify importing & exporting a g1m file')
    parser.add_argument('--verbose', '-v', action='count', default=0,
            help='Level of verbosity')
    args = parser.parse_args()

    sections = set()
    if args.export_vgmap:
        sections = sections.union({b'G1MS', b'G1MG'})
    if args.import_vgmap:
        sections = sections.union({b'G1MS', b'G1MG', b'G1MF'})
    verbosity = args.verbose
    if not verbosity and not sections and not args.test:
        verbosity = 1

    return (args, sections)

def main_standalone():
    args, sections = parse_args()
    for arg in args.files:
        print('Parsing %s...' % arg)
        g1m = G1MFile(open(arg, 'rb'), sections)

        if args.test:
            buf = io.BytesIO()
            g1m.write(buf)
            print('Writing %s...' % (arg + '.TEST'))
            open(arg + '.TEST', 'wb').write(buf.getvalue())
            assert(open(arg, 'rb').read() == buf.getvalue())
            print('Test #1 succeeded')

        if args.import_vgmap:
            g1m.import_vgmaps()
            if not os.path.exists(arg + '.bak'):
                try:
                    os.rename(arg, arg + '.bak')
                except OSError:
                    pass
            print('Writing %s...' % arg)
            g1m.write(open(arg, 'wb'))

        if args.test:
            buf = io.BytesIO()
            g1m.write(buf)
            buf.name = arg
            buf.seek(0)
            G1MFile(buf, None)
            print('Test #2 succeeded')

        if args.export_vgmap:
            g1m.export_vgmaps()

if 'bpy' in globals():
    class ImportDOA6Soft(bpy.types.Operator, bpy_extras.io_utils.ImportHelper):
        """Import DOA6 Soft Body nodes"""
        bl_idname = "import_mesh.doa6_soft"
        bl_label = "Import DOA6 Soft Body nodes"
        bl_options = {'UNDO'}

        filename_ext = '.g1m'
        filter_glob = bpy.props.StringProperty(
                default='*.g1m',
                options={'HIDDEN'},
                )

        def execute(self, context):
            G1MFile(open(self.filepath, 'rb'), {b'SOFT'})
            return {'FINISHED'}

    class UpdateDOA6Soft(bpy.types.Operator):
        """Update DOA6 soft body vertex positions"""
        bl_idname = "mesh.update_doa6_soft_body"
        bl_label = "Update DOA6 soft body vertex positions"
        bl_options = {'UNDO'}

        WeightedNode = collections.namedtuple('WeightedNode', ['pos', 'weights'])

        def find_targets(self, context):
            grid = None
            targets = []
            for obj in context.selected_objects:
                if obj.name.find('.SOFT[') != -1:
                    while obj.parent and obj.parent.name.find('.SOFT['):
                        obj = obj.parent
                    if grid and grid != obj:
                        raise Fatal('Multiple soft body grids selected')
                    grid = obj
                else:
                    if set(['TEXCOORD%u.%s'%(x,y) for x in (8, 9) for y in ('xy', 'zw')]).difference(obj.data.uv_layers.keys()):
                        raise Fatal('Selected object does not have expected TEXCOORD8+9 UV layers')
                    if set(['%s.%s'%(x,y) for x in ('PSIZE', 'FOG') for y in 'xyzw']).difference(obj.data.vertex_layers_int.keys()):
                        raise Fatal('Selected object does not have expected PSIZE & FOG integer vertex layers')
                    if 'SAMPLE.x' not in obj.data.vertex_layers_float.keys():
                        raise Fatal('Selected object does not have expected SAMPLE float vertex layer')
                    targets.append(obj)
            if not grid:
                raise Fatal('No soft body grids selected')
            if not targets:
                raise Fatal('No target meshes selected')
            return (grid, targets)

        def find_parallel_sides(self, nodes, n):
            # We need to find sides pointing in the same direction. We don't
            # necessarily know what order the nodes are in (though maybe with
            # some analysis of how the nodes are typically laid out we could
            # assume something?), so we will arbitrarily take a vector
            # connecting the first two corners then scan through all
            # permutations of pairs of the remaining corners to locate the
            # three sides that most closely match that vector.
            #
            # Assumes all the sides we are looking for have approximately the
            # same length and direction.
            #
            side1_vec = nodes[1].pos - nodes[0].pos
            other_pairs = itertools.permutations(nodes[2:], 2)
            other_vecs = [(numpy.linalg.norm(y.pos - x.pos - side1_vec), x,y) for x,y in other_pairs]
            other_vecs = sorted(other_vecs, key=lambda x: x[0])[:n-1]
            other_vecs = list(zip(*list(zip(*other_vecs))[1:]))
            return [(nodes[0], nodes[1])] + other_vecs

        @classmethod
        def ratio_along_line(cls, pos, line_pos_1, line_pos_2):
            # Finds how far along a line a given point lies, returning 0.0 at
            # line_pos_1, and 1.0 at line_pos_2, and whatever between them.
            # This point does not need to lie on the line, but the closest
            # point on the line will be considered.
            # EDIT: change this method to vector calculating, instead of degree calculating
            P31 = numpy.array(pos) - line_pos_1
            P21 = line_pos_2 - line_pos_1
            Len21 = numpy.dot(P21,P21)
            if Len21 < 0.001:
                return 0.0
            return numpy.dot(P31, P21) / Len21

        @staticmethod
        def interpolate_linear(n1, n2, ratio):
            return (1.0-ratio)*n1 + ratio*n2

        @classmethod
        def interpolate_weighted_nodes(cls, n1, n2, ratio):
            pos = cls.interpolate_linear(n1.pos, n2.pos, ratio)
            weights = [cls.interpolate_linear(x,y,ratio) for (x,y) in zip(n1.weights, n2.weights)]
            return cls.WeightedNode(pos, weights)

        def interpolate_weights_linear(self, pos, nodes):
            if len(nodes) != 2:
                self.report({'WARNING'}, 'Vertex at %s surrounded by irregular number of %u nodes' % (pos, len(nodes)))
                return None
            r = self.ratio_along_line(pos, nodes[0].pos, nodes[1].pos)
            # self.report({'INFO'}, 'ratio = %f' % (r))
            interpolated = self.interpolate_weighted_nodes(nodes[0], nodes[1], r)
            return interpolated.weights

        def interpolate_weights_bilinear(self, pos, nodes):
            if len(nodes) != 4:
                return self.interpolate_weights_linear(pos, nodes)
            sides = self.find_parallel_sides(nodes, 2)
            interpolated_line = []
            for n1,n2 in sides:
                r = self.ratio_along_line(pos, n1.pos, n2.pos)
                interpolated = self.interpolate_weighted_nodes(n1, n2, r)
                interpolated_line.append(interpolated)
            return self.interpolate_weights_linear(pos, interpolated_line)

        def interpolate_weights_trilinear(self, pos, nodes):
            if len(nodes) != 8:
                return self.interpolate_weights_bilinear(pos, nodes)
            # The nodes should form a cube or rectangular prism, and we want to
            # interpolate on four sides pointing in one direction to form a
            # square, then on two opposite sides of that square to form a line,
            # then on that line to find a point that should match the vertex
            # position. If we also interpolate weights between the corners,
            # where each corner has its own weight set at 1.0 and all other
            # weights set at 0.0, then the interpolated weights should be
            # usable to reconstruct the vertex location given the corner
            # positions.
            sides = self.find_parallel_sides(nodes, 4)
            interpolated_square = []
            for n1,n2 in sides:
                # Can probably get away with calculating r once and reusing it
                # for the next three sides, but do it each time allowing for
                # the nodes to not quite form a grid:
                r = self.ratio_along_line(pos, n1.pos, n2.pos)
                interpolated = self.interpolate_weighted_nodes(n1, n2, r)
                interpolated_square.append(interpolated)
            return self.interpolate_weights_bilinear(pos, interpolated_square)

        def update_soft_body_sim(self, grid_parent, target):
            node_locations = numpy.array([ x.location for x in grid_parent.children ])
            node_ids = [ int(x.name.rpartition('[')[2].rstrip(']')) for x in grid_parent.children ]
            #print('Nodes', list(zip(node_ids, node_locations)))

            uv_layer_names = ['TEXCOORD%u.%s'%(x,y) for x in (8, 9) for y in ('xy', 'zw')]
            node_layer_names = ['%s.%s'%(x,y) for x in ('PSIZE', 'FOG') for y in 'xyzw']

            for layer in uv_layer_names:
                try:
                    target['3DMigoto:' + layer]['flip_v'] = False
                except KeyError:
                    target['3DMigoto:' + layer] = {'flip_v': False}

            Nodes = collections.namedtuple('Node', ['id', 'dist', 'pos', 'vec'])

            # One slot per possible surrounding-node count (0-8); the error is
            # stored at index [1] of each entry:
            max_errors = [(0.0, None, None, None, [])]*9
            for l in target.data.loops:
                vertex = target.data.vertices[l.vertex_index]
                vectors = [vertex.co]*len(node_locations) - node_locations

                # numpy.linalg.norm can calculate distance reportedly faster
                # than scipy.spacial.distance.euclidean:
                distances = numpy.linalg.norm(vectors, axis=1)
                sorted_nodes = sorted([Nodes(*x) for x in zip(node_ids, distances, node_locations, vectors)],
                        key=lambda x: x.dist, reverse=True)

                # We need to exclude any nodes in the same direction from this
                # vertex as earlier nodes, so that vertices outside of the grid
                # will only use the nearest four nodes and so that vertices
                # near a node will use the nodes forming a cube around it and
                # not leach over to a neighbouring cube that happens to have a
                # closer node.
                #
                # For each node we include, form a plane intersecting that
                # node with the normal pointing towards this vertex. Nodes that
                # lie on the far side of the plane are excluded. Keep going
                # until we have 8 nodes that should form a cube around the
                # vertex, or have run out of nodes.
                surrounding_nodes = []
                non_adjacent_nodes = []
                while sorted_nodes:
                    node = sorted_nodes.pop()
                    surrounding_nodes.append(node)
                    if len(surrounding_nodes) == 8:
                        non_adjacent_nodes.extend(sorted_nodes)
                        break
                    # To distinguish the two sides of a plane, calculate a
                    # normal n to it at some point p. Then a point v is on the
                    # side where the normal points at if (v−p)⋅n>0 and on the
                    # other side if (v−p)⋅n<0.
                    # - https://math.stackexchange.com/questions/214187/point-on-the-left-or-right-side-of-a-plane-in-3d-space#214194
                    for i, node1 in reversed(list(enumerate(sorted_nodes))):
                        if numpy.dot(node1.pos - node.pos, node.vec) < 0:
                            non_adjacent_nodes.append(sorted_nodes.pop(i))

                num_surrounding_nodes = len(surrounding_nodes)

                # If a vertex falls outside the grid we can try to handle it by
                # finding the nearest cube of nodes and interpolating outside
                # of that cube. For the moment we are only going to attempt
                # this if we have found four surrounding nodes indicating this
                # vertex is on the outside of one of the flat sides of the grid
                # (and not diagonally out from a corner). Find the midpoint of
                # the surrounding nodes and the next nearest four nodes to that
                # point should (hopefully) be the next four points on the cube
                # (if not we could do a cross product of a corner on the square
                # to find the normal, then restrict the nodes we consider to those
                # that are roughly lined up with one of the corners)
                if num_surrounding_nodes != 8:
                    midpoint = numpy.average([x.pos for x in surrounding_nodes], axis=0)
                    sorted_nodes = sorted(non_adjacent_nodes, key=lambda x: numpy.linalg.norm(x.pos - midpoint))
                    surrounding_nodes = surrounding_nodes[:4]
                    surrounding_nodes.extend(sorted_nodes[:4])

                # Sort by Node ID (probably unnecessary, but puts it in the same
                # order as original for comparison)
                surrounding_nodes = sorted(surrounding_nodes, key=lambda x: x.id)

                weighted_nodes = [self.WeightedNode(n.pos, [0.0]*i + [1.0] + [0.0]*(len(surrounding_nodes)-i-1)) \
                        for i,n in enumerate(surrounding_nodes)]
                weights = self.interpolate_weights_trilinear(vertex.co, weighted_nodes)
                if num_surrounding_nodes != 8:
                    select_set(target.data.vertices[l.vertex_index], True)

                if weights is None:
                    continue

                # Check how accurate our result is - calculate the position
                # based on the node weights and compare it to the vertex
                # position, warning if any vertices are excessively inaccurate.
                pos = numpy.array([0.0,0.0,0.0])
                weightsum = 0.0
                for i,n in enumerate(weighted_nodes):
                    pos += n.pos * weights[i]
                    weightsum += abs(weights[i])
                error = numpy.linalg.norm(pos - vertex.co)
                # Debug: weightsum > 1.0 means the vertex is only approximately
                # skinned (some weights went negative, i.e. extrapolation):
                # self.report({'INFO'}, "vertex %u weightsum = %f" % (vertex.index, weightsum))
                # Track the worst vertex per surrounding-node count. The stored
                # error lives at index [1] (None until the first update):
                prev_error = max_errors[num_surrounding_nodes][1]
                if prev_error is None or error > prev_error:
                    max_errors[num_surrounding_nodes] = (vertex.index, error, vertex.co, pos, weights)

                # Zero out existing weights:
                for i in range(4):
                    target.data.uv_layers[uv_layer_names[i]].data[l.index].uv = (0, 0)

                # Write new soft body node IDs and weights to the vertices:
                for i,n in enumerate(surrounding_nodes):
                    target.data.vertex_layers_int[node_layer_names[i]].data[vertex.index].value = n.id
                    target.data.uv_layers[uv_layer_names[i//2]].data[l.index].uv[i%2] = weights[i]

            for i,max_error in enumerate(max_errors):
                if max_error[1] is not None:
                    self.report({'INFO'}, "Maximum error for %i surrounding nodes: vertex %u off by %f, vertex position %s, calculated position %s, weights %s" % ((i,) + max_error))

        def execute(self, context):
            try:
                grid_parent, targets = self.find_targets(context)

                # Deselect all vertices first (problem vertices get selected
                # later) and switch to object mode:
                if context.object and context.object.type == 'MESH':
                    bpy.ops.object.mode_set(mode='EDIT')
                    bpy.ops.mesh.select_mode(type="VERT")
                    bpy.ops.mesh.select_all(action='DESELECT')
                    bpy.ops.object.mode_set(mode='OBJECT')

                for target in targets:
                    self.update_soft_body_sim(grid_parent, target)

            except Fatal as e:
                self.report({'ERROR'}, str(e))

            return {'FINISHED'}

    class ExtractDOA6VGMaps(bpy.types.Operator, bpy_extras.io_utils.ImportHelper):
        """"Extract DOA6 vertex group maps"""
        bl_idname = "misc.extract_doa6_vgmaps"
        bl_label = "Extract DOA6 vertex group maps"
        bl_options = {'UNDO'}

        filename_ext = '.g1m'
        filter_glob = bpy.props.StringProperty(
                default='*.g1m',
                options={'HIDDEN'},
                )

        files = bpy.props.CollectionProperty(
                name="File Path",
                type=bpy.types.OperatorFileListElement,
                )

        def execute(self, context):
            def redirect_print(*args, **kwargs):
                buf = io.StringIO()
                print(*args, file=buf, end='', **kwargs)
                self.report({'INFO'}, buf.getvalue())

            dirname = os.path.dirname(self.filepath)
            for filename in self.files:
                redirect_print('Parsing %s...' % filename.name)
                g1m = G1MFile(open(os.path.join(dirname, filename.name), 'rb'), {b'G1MS', b'G1MG'})
                g1m.export_vgmaps(print=redirect_print)

            return {'FINISHED'}

    def menu_func_import_soft(self, context):
        self.layout.operator(ImportDOA6Soft.bl_idname, text="DOA6 Soft Body (.g1m)")

    register_classes = (
        ImportDOA6Soft,
        UpdateDOA6Soft,
        ExtractDOA6VGMaps,
    )

    def register():
        for cls in register_classes:
            make_annotations(cls)
            bpy.utils.register_class(cls)

        import_menu.append(menu_func_import_soft)

    def unregister():
        for cls in reversed(register_classes):
            bpy.utils.unregister_class(cls)

        import_menu.remove(menu_func_import_soft)

if __name__ == '__main__':
    if 'bpy' in globals():
        register()
    else:
        main_standalone()
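A standalone sanity check of the trilinear idea described in the script's comments: give each of the eight cube corners a one-hot weight set, interpolate those weights at a point inside the cube, and the weighted sum of corner positions reproduces the point. This toy sketch assumes a regular axis-aligned box; the script itself copes with node grids that are only approximately cubic:

```python
def trilinear_weights(p, lo, hi):
    # Fractional position of p along each axis of the box [lo, hi].
    t = [(p[i] - lo[i]) / (hi[i] - lo[i]) for i in range(3)]
    weights = []
    for corner in range(8):
        w = 1.0
        for axis in range(3):
            bit = (corner >> axis) & 1        # 0 = lo side, 1 = hi side
            w *= t[axis] if bit else (1.0 - t[axis])
        weights.append(w)
    return weights

lo, hi = (0.0, 0.0, 0.0), (2.0, 2.0, 2.0)
corners = [tuple(hi[a] if (c >> a) & 1 else lo[a] for a in range(3))
           for c in range(8)]
p = (0.5, 1.5, 1.0)
w = trilinear_weights(p, lo, hi)
rebuilt = tuple(sum(w[c] * corners[c][a] for c in range(8)) for a in range(3))
print(rebuilt)  # reconstructs p: (0.5, 1.5, 1.0)
```

Inside the box the weights sum to 1.0; outside it some become negative, which is why the script tracks abs(weights) in its weightsum check.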

 

 


Hi...
Is anyone here using the ryona cheat engine?

Please help...

I use this cheat engine: https://github.com/Doa6Vr/Doa6Vr/releases/tag/0.35

Most of its functions work, but the game speed and game pause options do nothing.

This is frustrating... I want to take screenshots with time frozen.

What is wrong? Please help me!!!

 

The controller I use is an Xbox 360 controller.

 

1375743485_bandicam2021-01-2501-17-23-618.png.cea4a0a271956da1cb0ca8ebd34eb1d9.png

On 10/9/2020 at 3:35 PM, fgh1t6 said:

Well done!

Thank you, you've been a lot of help!

 

It's certainly been a while, but I still haven't finished this dumb quest of getting Leifang into just shoes and gloves. I've been trying on and off for months to get this to work, and I just can't. I'm pulling my hair out here.
 

Following your instructions, I was able to remap the glove and shoe textures to the correct places. Only now, nearly her entire body is the wrong texture.

 

My current results:

Spoiler

20210201035422_1.thumb.jpg.7bcf752f62da0cbb1d07b9522384096a.jpg



I think I know what happened.

I merged (CTRL+J) the body mesh with the gloves/shoes mesh. Since they're all one mesh now, they can't be assigned different textures: it's either the body texture or the gloves/shoes texture.

I thought this was a bad idea, but I couldn't get it to work with any of saafrat's nude meshes (.vb + .ib). Ideally, the clothing should be on its own layer, weighted to the body, so that it stays separate but moves together with it.

 

The ones I tried to replace were [2.vb, 3.vb] since they seemed to contain floating vertices of belts, boots, and cuffs that I wasn't using (that I know of). These have not worked so far.

 

At first [1.vb, 2.vb] looked like breast meshes, yet they're just floating vertices; on closer inspection they seem to be part of a shirt, while [4.vb, 5.vb] seem to be the breasts proper, fully skinned.

 

 

Spoiler

863263193_leifangvertices.jpg.6c7fcf52957d2aa30d4e97ea200855b6.jpg1098888861_leifangvertices2.jpg.c6608447863b460c1c4f44522496e764.jpg

 

I've been writing down the steps I've followed along the way. If nothing else, maybe someone will find my notes useful.

 

They are as follows:
 

Spoiler

File types:

  • .g1t (compiled texture file)

  • .g1m (compiled mesh file)

  • .g1m.xml (text editable file extracted from .g1m file)

  • .mtl (material file)

  • .mtl.txt (text editable file extracted from .mtl file)

  • .vb (3D mesh file)

GOAL:
        Apply new clothing to saafrat's nude base.
        Character: Leifang

        Adding: Shoes and gloves from LEI_COS_004.
 

        (REDELBE installed and working.)

  • Extract .g1m files from CharacterEditor.rdb.bin from DEAD OR ALIVE 6 install directory via rdbtool (click and drag).

  • Select the 'main .g1m' file that has the majority of parts needed. For me, this is saafrat's nude body mesh.

  • Export via click-and-dragging the .g1m over the .g1m_export_with_vgmap.bat.

  • Install 3DMigoto plugin to Blender 2.7.

  • NOTE: Edit in Blender 2.7 specifically or else 3DMigoto will not work!

  • Import all .vb and .ib files of the mesh with the desired parts as 3DMigoto raw buffers (.vb + .ib)

  • Make edits. Delete whatever is unneeded.

  • NOTE: In Blender, change to Edit Mode. Turn off “Limit selection to visible.” Shortcut 'B' for box selection.

  • Transfer bone weights:

    • Select upside-down triangle icon under the meshes dock ('Data Object data'). While new imported mesh is selected, open the 'Vertex Group' tab and remove all vertex groups with the black arrow icon → Delete All Groups

    • Select the mesh to transfer weights from (the source body mesh), then shift-select the mesh to transfer weights to (the destination clothing mesh).

    • NOTE: Make sure source and destination meshes are visible!

    • Select Weight Paint Mode.

    • Go to the Weights Tab.

    • Make sure “Create Data” is checked.

    • Vertex Mapping: Nearest Face Interpolated

    • Source Layers Selection: By Name

    • NOTE: Make sure the body mesh is unedited! (maybe?)

  • Press 'space'. Type “Assign new 3D Migoto vertex groups”.

  • Vertex group step '3'

  • Export as 3DMigoto raw buffers (.vb + .ib)

  • Import (only?) the .vb file back into the original 'main .g1m' file via g1m_import.exe: the base .g1m file should still sit next to the exported folder. Copy the newly exported .vb file into the export folder (replacing the old one), then click and drag the .g1m over import_g1m.exe.

  • ERROR: Exported meshes crash the game upon selecting.
    This didn't happen originally, as I have a working model (see above; merged body and shoe/glove mesh). I don't know what I'm doing wrong here.
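For intuition, the weight-transfer step in the notes above boils down to something like this toy sketch: each clothing vertex copies the weights of the closest body vertex. This is a plain-Python nearest-vertex stand-in, not Blender's actual "Nearest Face Interpolated" operator, and every name in it is made up for illustration:

```python
def transfer_weights(body, cloth):
    # body: list of (position, {vertex_group: weight}); cloth: list of positions.
    def dist2(a, b):
        return sum((a[i] - b[i]) ** 2 for i in range(3))
    result = []
    for pos in cloth:
        # Copy the weight dict of the nearest body vertex.
        nearest = min(body, key=lambda bv: dist2(bv[0], pos))
        result.append(dict(nearest[1]))
    return result

body = [((0, 0, 0), {'spine': 1.0}), ((0, 2, 0), {'neck': 1.0})]
cloth = [(0.1, 0.1, 0.0), (0.0, 1.9, 0.1)]
print(transfer_weights(body, cloth))  # → [{'spine': 1.0}, {'neck': 1.0}]
```

Blender's operator interpolates across the nearest face instead of snapping to one vertex, which gives smoother results along clothing edges.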

 

Assigning Textures:

  • Extract .mtl files from MaterialEditor.rdb.bin from DEAD OR ALIVE 6 install directory via rdbtool (click and drag).

  • Extract .mtl via fid_utility.exe for .mtl.txt file

  • Refer to .mtl.txt for materials assigned to which mesh set.

  • Identify materials (ex: MPR[...]COS004_a01,0 means a01 (the gloves/shoes mesh) is assigned material/matpalid 0).

  • Extract .g1m.xml via g1m.xml

  • Refer to the .g1m.xml. Identify the submesh (ex: <Submesh idx="10">); the idx matches the ###.vb number.

  • Change 'matpalid value' in .g1m.xml to that of desired texture (ex: a01's 0 for gloves and shoes texture on 10.vb)

  • Scroll down to the next subsection:
    <MeshesSection>
    <LodGroup idx="0" auto="true">
    <Mesh idx="#">

  • # = the block with that mesh idx (ex: <Mesh idx="9"> […] <Submeshes value="10" />). Mesh idx refers to a mesh block, not the submesh itself; its Submeshes value matches the Submesh idx above.

  • NOTE: <Shader value="@DC694116" /> is a skin shader.

  • NOTE: <Shader value="@1FE387E1" /> is a common default material shader.

  • Refer to the original mesh id (ex: 3.vb → <Submeshes value="3" /><Shader value="@########" />), where ######## is the original shader id.

  • Import .g1m.xml back into the original .g1m.

  • Place .g1m into 'Character' folder of mod via REDELBE.
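The matpalid edit described above can also be scripted rather than done by hand. Here is a hedged sketch using xml.etree, operating on a fragment shaped like the notes describe; the real .g1m.xml layout and the exact 'matpalid' element/attribute names are assumptions here, not verified against the format:

```python
import xml.etree.ElementTree as ET

# Hypothetical fragment shaped like the notes above; real .g1m.xml differs.
snippet = '''<Model>
  <Submesh idx="10"><matpalid value="3" /></Submesh>
  <Submesh idx="11"><matpalid value="1" /></Submesh>
</Model>'''

def set_matpalid(xml_text, submesh_idx, new_id):
    # Point the given submesh at a different material palette id.
    root = ET.fromstring(xml_text)
    for sub in root.iter('Submesh'):
        if sub.get('idx') == str(submesh_idx):
            sub.find('matpalid').set('value', str(new_id))
    return ET.tostring(root, encoding='unicode')

patched = set_matpalid(snippet, 10, 0)
print('value="0"' in patched)  # True: submesh 10 now uses matpalid 0
```

Scripting the edit also makes it easy to re-apply after re-extracting a fresh .g1m.xml.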


Any help would be appreciated. This is killing me.


I would like to venture into modding the character models, but I have a question or two; maybe just one for now.

 

 

Is it possible to mod/edit the sweat drip effect? Not the sweat texture on skin and clothes, but the sweat drop effect on the character after the match.

 

It only appears on the chin, and I would like to make the effect also appear on the hair and other body parts (ears, nose, hands, etc.) to make it more realistic, and possibly for cinematic purposes.

 

My idea is to duplicate the effect by attaching the sweat trigger to other parts of the character model, but I don't know if this can be done.

 

From what I have seen, the effect seems to be linked to the winning/losing animations rather than to the character model, so it may involve editing the animations.

 

Another thing: I miss the sweat-flowing effect from DOA5. If there were a way to bring it back, and even enhance it a bit, that would be great.

On 1/31/2021 at 7:55 PM, Emiya_Kiristugu said:

Is it possible to make a Tamaki version and a Nico version of this mod?

Do you mean an edit of Nico's own Hot Summer outfit, or moving this exact outfit from Momiji to Nico?

I can edit Nico's own Hot Summer costume in the same style as this one (making it topless, for example), but I can't move an outfit from one character to another; that only works between characters with exactly the same body shape.

